Location: (New York, NY)
Personal Research Web Page: http://eniac.cs.qc.cuny.edu/matt/
Keywords: American Sign Language, animation, assistive technology for people with disabilities, accessibility, computational linguistics, motion-capture, natural language processing, machine translation, sign language, text simplification, readability.
Posted on: Tuesday, May 10th, 2011
Broad Research Area: AI / Machine Learning / Robotics / Vision, Computer Science Education / Educational Technology, HCI / CSCW, Other, Social Computing / Social Informatics
The Linguistic and Assistive Technologies Laboratory (LATLab) at The City University of New York (CUNY), Queens College, conducts research in computational linguistics and human-computer interaction with a primary focus on accessibility applications and assistive technology for people with disabilities. In particular, we study the design of computer technology to address language and literacy impairments of people with disabilities – with a focus on two groups: people who have intellectual disabilities (ID) and people who are deaf.
Animations of American Sign Language for People who are Deaf:
Our research at CUNY has focused on sign language animation technologies for people who are deaf. American Sign Language (ASL) is a primary means of communication for one-half million people in the U.S., and it is a distinct language from English – with its own unique word-order, grammar, and vocabulary. A majority of deaf high school graduates in the U.S. have only a fourth-grade English reading level or below, yet many of these adults have sophisticated fluency in ASL. Therefore, software that can present information in the form of ASL animations would improve these individuals’ access to websites, communication, and information. Using our motion-capture recording studio, we are collecting a corpus of ASL performances, linguistically annotating them (with the help of a team of native ASL signers), and analyzing this data to create models to underlie ASL animation technologies (to make them more natural-moving and understandable for deaf viewers). We conduct experimental studies at the laboratory on a regular basis in which native ASL signers evaluate animations synthesized using alternative models. We have studied models of speed/timing, use of signing space, and verb inflection, and we are beginning a project on facial expression in ASL. We are also interested in issues relating to linguistic generation of ASL and English-to-ASL machine translation. Further, we are interested in studying other forms of signing communication, including various forms of Signed English used by people who are deaf or hard-of-hearing in the U.S.
Text Readability Detection / Text Simplification for People with Intellectual Disabilities:
Many people with ID have limited English literacy and could benefit from software that helps them automatically identify information sources or websites at an appropriate level of difficulty, or that automatically simplifies complex texts. Based on linguistic features of a text that can be automatically calculated through NLP software (the part-of-speech of different words, the syntactic parse-tree of the sentences, etc.), we are designing software to assign a difficulty score to a text, indicating whether it would be accessible for an adult with ID to read. We are also experimenting with various experimental designs to gather ground-truth data about how difficult texts are to read for these users; this is non-trivial because it can be difficult for these users to participate in traditional comprehension experiments.
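To make the feature-based scoring idea concrete, here is a minimal sketch of how linguistic features of a text can be combined into a single difficulty score. This is purely illustrative and not the lab's actual model: it uses simple surface proxies (sentence length, word length, long-word ratio) in place of the richer part-of-speech and parse-tree features described above, and the weights are made up for demonstration; in practice they would be fit to ground-truth data from the target user group.

```python
import re

def difficulty_score(text):
    """Assign a rough difficulty score to a text from surface features.

    Hypothetical sketch: stands in for a model over part-of-speech and
    parse-tree features, using only features computable without a parser.
    Higher scores suggest a harder-to-read text.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    # Longer sentences and longer words are rough proxies for syntactic
    # and lexical complexity, respectively.
    avg_sentence_len = len(words) / len(sentences)
    avg_word_len = sum(len(w) for w in words) / len(words)
    long_word_ratio = sum(1 for w in words if len(w) >= 7) / len(words)
    # Illustrative linear combination with made-up weights; a real system
    # would learn these from comprehension data gathered with users.
    return 0.4 * avg_sentence_len + 2.0 * avg_word_len + 10.0 * long_word_ratio

easy = difficulty_score("The cat sat. The dog ran.")
hard = difficulty_score(
    "Notwithstanding considerable methodological heterogeneity, "
    "longitudinal investigations demonstrate substantial comprehension deficits."
)
```

A deployed system could compare such a score against a threshold calibrated for a particular reader group to flag a webpage as accessible or not.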
We are interested in other projects related to applications of computational linguistics or human computer interaction to accessibility or assistive technology for people with disabilities.
Email is the best way to contact me: matt(at)cs.qc.cuny.edu.