When listening to speech sounds, for example during a conversation, our brain acts as an “acoustics-to-phonetics converter”: it must extract acoustic features from the input sound wave and map them onto a stream of discrete linguistic units, such as phonemes, syllables, and words. These acoustic cues are picked up effortlessly and very rapidly from the incoming sound. Nevertheless, this apparent simplicity conceals a complex series of auditory and cognitive mechanisms that remain largely unknown. The present project will explore the decoding of speech by the human auditory system, at the interface between acoustics and phonetics.
We will use the reverse correlation technique to map the phonemic representations used by normal-hearing listeners. The fast-ACI method (developed as a MATLAB toolbox by our group: https://github.com/aosses-tue/fastACI) relies on a stimulus-response model, fitted using advanced machine learning techniques, to produce an instant picture of a participant’s listening strategy in a given auditory task. This method will be applied to unveil the speech cues used by French listeners to categorize phonemes (e.g. /aba/ vs. /ada/ categorization).
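To illustrate the logic of reverse correlation described above, the following Python sketch simulates a toy listener whose categorization responses depend on a localized region of a noisy time-frequency stimulus, then recovers that region from the trial-by-trial noise and responses. This is a minimal, hypothetical simulation: the stimulus dimensions, the simulated observer, and the simple response-conditioned averaging are illustrative assumptions, not the actual fastACI implementation (which fits a stimulus-response model with machine learning techniques, as stated above).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: each "stimulus" is an 8x8 time-frequency noise
# field; the simulated listener's decision weights are nonzero only in a
# small patch, mimicking a localized acoustic cue (e.g. a formant transition).
n_trials, n_freq, n_time = 5000, 8, 8
true_weights = np.zeros((n_freq, n_time))
true_weights[2:4, 3:5] = 1.0  # the cue region the observer relies on

# Simulate one noise field per trial and a binary response driven by the
# cue region plus internal (decision) noise.
noise = rng.standard_normal((n_trials, n_freq, n_time))
drive = noise.reshape(n_trials, -1) @ true_weights.ravel()
responses = (drive + rng.standard_normal(n_trials) > 0).astype(int)

# Reverse correlation in its simplest form: the difference between the mean
# noise fields conditioned on each response. The resulting map is a crude
# auditory classification image (ACI); model-based fitting refines this idea.
aci = noise[responses == 1].mean(axis=0) - noise[responses == 0].mean(axis=0)

# The recovered map should peak inside the true cue region.
peak = np.unravel_index(np.abs(aci).argmax(), aci.shape)
print("peak of recovered ACI:", peak)
```

With enough trials, the response-conditioned average converges on the weights the simulated observer actually used, which is the core intuition behind mapping a listener's strategy from their responses to noisy speech stimuli.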
The overarching research hypothesis to be tested in this project is that phoneme comprehension by the human brain is far from a simple one-to-one association between acoustic utterances and phonological representations. Rather, it is a complex and dynamic process that combines multiple acoustic cues into a single phonetic percept.