Augmentative communication technologies using naturalistic data and personalized machine learning
Understanding nonverbal communication
More than 1 million people in the United States are nonverbal or minimally verbal, including people with autism, Down syndrome, and other genetic disorders. These individuals experience stress, frustration, and isolation when communicating in a society largely constructed around typical verbal speech. Yet, through nonspeech vocalizations, they express rich affective and communicative information, and their parents or primary caregivers are often able to “translate” these sounds. This project is developing a full-feedback augmentative-communication system that uses a database of vocalizations from nonverbal and minimally verbal individuals, together with personalized machine learning models, to “translate” nonspeech sounds into speech. The goal is to enhance communicative exchanges between nonverbal or minimally verbal individuals and the wider community.
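To illustrate the kind of personalized model such a system might use, the sketch below trains a per-individual nearest-centroid classifier that maps acoustic features of a vocalization (e.g., pitch, duration, energy) to caregiver-labeled communicative intents. Everything here is a simplified assumption for illustration: the feature set, the intent labels, and the nearest-centroid method are hypothetical and are not the project's actual dataset or algorithm.

```python
import math

# Hypothetical per-individual training data: acoustic feature vectors
# (pitch mean in Hz, duration in s, normalized energy) labeled with
# caregiver-provided intents. Values are illustrative, not real data.
TRAINING = {
    "request": [[220.0, 0.4, 0.7], [230.0, 0.5, 0.8]],
    "protest": [[180.0, 1.2, 0.9], [170.0, 1.1, 1.0]],
    "delight": [[300.0, 0.3, 0.6], [310.0, 0.2, 0.5]],
}

def centroid(vectors):
    """Mean feature vector for one intent class."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(features, training):
    """Label a new vocalization by its nearest intent centroid."""
    centroids = {label: centroid(vs) for label, vs in training.items()}
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

# Classify a new vocalization with request-like features.
print(classify([225.0, 0.45, 0.75], TRAINING))  # → request
```

Because caregivers interpret each individual's sounds differently, a deployed system would train one such model per person rather than a single shared model; the predicted intent could then drive synthesized speech output.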