Current functional models of the auditory system process the spectral and temporal dimensions of sounds independently: cochlear filters first decompose the signal into separate frequency bands, and the temporal information in each band is then processed independently through modulation filtering. However, animal and human vocalizations contain spectro-temporally oriented patterns of energy (e.g., formant transitions), and electrophysiological studies have identified central auditory neurons that are sensitive to specific spectro-temporal directions (i.e., non-separable neurons). This suggests that the auditory system has dedicated machinery to integrate the temporal information present in different frequency channels, in order to form auditory objects and support robust speech perception. Yet information on how this mechanism operates at the perceptual level remains scarce.

The main objective of this internship will be to design and conduct psychophysical experiments based on spectro-temporally modulated signals (https://doi.org/10.1177/233121652097802) to better understand the characteristics of this central integration process, and thereby guide its computational implementation in current models. Furthermore, we will examine whether interindividual variability in across-frequency integration capacities could help explain why normal-hearing individuals vary greatly in their ability to understand speech in noise.

This internship will take place in the STMS Lab (Sciences et Technologies de la Musique et du Son) at Ircam (Institut de Recherche et Coordination Acoustique/Musique) in Paris.
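To give a concrete sense of the class of stimuli involved, the sketch below generates a basic moving-ripple (spectro-temporally modulated) signal: a bank of log-spaced tone carriers whose envelopes drift jointly across time and log-frequency, so the modulation has a defined spectro-temporal direction. This is a minimal illustration assuming NumPy; the function name and all parameter values are illustrative and do not reproduce the exact stimuli of the cited study.

```python
import numpy as np

def moving_ripple(dur=1.0, fs=44100, f0=250.0, n_oct=5, n_tones=40,
                  rate=4.0, density=1.0, depth=0.9, seed=0):
    """Minimal moving-ripple sketch (illustrative, not the cited study's stimuli).

    rate    : temporal modulation rate in Hz (its sign sets the drift direction)
    density : spectral modulation density in cycles/octave
    depth   : modulation depth, 0..1
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * fs)) / fs
    x = np.linspace(0.0, n_oct, n_tones)      # tone positions, octaves above f0
    freqs = f0 * 2.0 ** x                     # log-spaced carrier frequencies
    sig = np.zeros_like(t)
    for xi, fc in zip(x, freqs):
        # Envelope drifting jointly in time (rate) and log-frequency (density):
        env = 1.0 + depth * np.sin(2 * np.pi * (rate * t + density * xi))
        phase = rng.uniform(0.0, 2.0 * np.pi)  # random carrier phase per tone
        sig += env * np.sin(2 * np.pi * fc * t + phase)
    return sig / np.max(np.abs(sig))          # normalize to +/-1
```

Sweeping `rate` and `density` (and the sign of `rate`) is what lets such experiments probe sensitivity to specific spectro-temporal modulation directions.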