The most frequent complaint made by people with hearing loss is difficulty understanding speech in noisy environments. One of the most effective ways to address this problem is to watch the talker’s face while he or she speaks. The process of deriving information from the movements of the lips, jaw, and other facial gestures of a talker is known as speechreading. When speechreading and hearing are combined, the result is an extremely robust speech signal that is highly resistant to noise and hearing loss. The aim of this area of research is to identify the various perceptual processes involved in auditory-visual speech perception, to determine the abilities of individual patients to carry out these processes successfully, and to design intervention strategies incorporating modern signal processing technologies and training techniques to remedy any deficiencies that may be found.
For our hearing-impaired patients, this work has direct application to listening strategies, such as positioning oneself to see the talker and learning to interpret the movements of the mouth and other facial structures in combination with what one can hear. This work also informs the development of new assistive listening devices, such as advanced hearing aids, by clarifying how a new hearing aid feature might perform under audio-alone listening conditions as well as under audiovisual listening conditions.