LCN - Laboratory of Computational Neuroscience

Temporal Processing in the auditory system

Our auditory system may be divided into three levels:

1. In the periphery, the acoustic signal is converted into neural impulses, which are transmitted to the brain.

2. In the brainstem and midbrain, these raw neural impulses are analysed and behaviorally important parameters are extracted, such as the location of the sound source. In animals with little or no neocortex, such as lizards or frogs, the information extracted at this level is sufficient to induce stereotyped behavioral responses to mating calls and to prey or predator sounds.

3. In mammals, the auditory areas of the neocortex give the animal a more refined perception of sounds, and therefore make more complex responses possible.

A classical approach to modeling the auditory system has been to treat the periphery as a Fourier-like analysis implemented by a bank of bandpass filters, and to view the entire lower auditory system as a spectrum estimator. However, this picture is not accurate: a magnitude spectrum is unchanged when a signal is reversed in time, so it implies that we shouldn't be able to tell the difference between a short (asymmetric) sound played forward and the same sound played backward -- but we can! The auditory periphery "preprocesses" the signal, and as a result we hear different sounds.
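The time-reversal argument can be checked directly: the magnitude spectrum of a signal and of its time-reversed copy are identical, so any purely spectral representation cannot distinguish them. A minimal sketch (the tone frequency, decay constant, and sample rate are arbitrary illustrative choices, not values from the text):

```python
import numpy as np

fs = 8000                              # sample rate in Hz (arbitrary)
t = np.arange(0, 0.05, 1.0 / fs)
# An asymmetric sound: a tone with a sharp attack and an exponential decay
sound = np.sin(2 * np.pi * 440 * t) * np.exp(-t / 0.01)
reversed_sound = sound[::-1]           # the same sound played backward

mag_forward = np.abs(np.fft.rfft(sound))
mag_backward = np.abs(np.fft.rfft(reversed_sound))

# The magnitude spectra agree to numerical precision, even though a
# listener clearly hears two different sounds.
print(np.allclose(mag_forward, mag_backward))  # True
```

Time reversal only conjugates and phase-shifts the Fourier coefficients, leaving their magnitudes untouched; whatever lets us hear the difference must therefore lie outside a pure spectrum estimate.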

Similarly, from the point of view of the cortex, the brainstem and midbrain areas also preprocess the neural auditory signal. A complete understanding of the auditory system is impossible without a better understanding of these areas. This is made difficult by their architectural complexity: physiologists have identified at least thirty heavily interconnected subdivisions, of which only a few have clearly identified functions (such as binaural localization). Multiple pathways convey neural impulses to higher levels; there are many different types of cells, with a wide range of biophysical properties; and there are far too many cells to model the system in full detail.

In contrast to standard filter-bank approaches, cochlear models introduce specific nonlinearities. We are currently testing whether these nonlinearities are useful in the context of speech recognition.
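To illustrate the kind of nonlinearity involved, here is a minimal sketch of a single channel (the filter and constants are illustrative assumptions, not the lab's actual cochlear model). After a linear bandpass stage, cochlear models typically add half-wave rectification (inner hair cells respond to one polarity of basilar-membrane motion) and amplitude compression; these stages break the time-reversal symmetry that a purely linear filter bank would preserve:

```python
import numpy as np

def linear_channel(x):
    """A toy linear bandpass stage: a first-difference FIR filter."""
    return np.convolve(x, [1.0, 0.0, -1.0], mode="same")

def cochlear_channel(x, exponent=0.3):
    """Linear stage followed by rectification and power-law compression."""
    y = linear_channel(x)
    y = np.maximum(y, 0.0)      # half-wave rectification
    return y ** exponent        # compressive nonlinearity

fs = 8000
t = np.arange(0, 0.02, 1.0 / fs)
# An asymmetric sound: sharp attack, exponential decay
click = np.sin(2 * np.pi * 500 * t) * np.exp(-t / 0.002)

# For the linear stage alone, reversing the input merely reverses (and
# negates) the output; the rectified, compressed output has no such symmetry:
fwd = cochlear_channel(click)
bwd = cochlear_channel(click[::-1])
print(np.allclose(fwd, bwd[::-1]))  # False: the nonlinearity is direction-sensitive
```

Whether nonlinearities of this general kind actually help a speech recognizer is exactly the empirical question the text describes.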

Collaborators:

A reference to one earlier study of fast processing in the auditory system:

Gerstner W, Kempter R, van Hemmen JL, and Wagner H (1996).
A neuronal learning rule for sub-millisecond temporal coding.
Nature 383:76-78.



