Project Details
Description
Project Summary
Throughout life, humans and other animals learn statistical regularities in the acoustic environment and adapt their hearing to emphasize the elements of sound that are important for behavioral decisions. Using these abilities, normal-hearing humans are able to perceive important sounds in crowded, noisy environments and to understand the speech of individuals the first time they meet. However, patients with peripheral hearing loss or central processing disorders often have trouble hearing in these challenging settings, even when sound is amplified above perceptual threshold. This study seeks to characterize how two major areas in the brain's auditory network, the auditory cortex and the midbrain inferior colliculus, establish an interface between incoming auditory signals and the internal brain states that select information appropriate to the current behavioral context. Single-unit neural activity will be recorded from both brain areas in awake ferrets during the presentation of complex naturalistic sounds that mimic the acoustic environments encountered in the real world. Internal brain state will be controlled by directing selective attention to specific sound features in these complex stimuli. Changes in stimulus-evoked neural activity as attention shifts among sound features will be measured to identify interactions between internal state and incoming sensory signals in each area.

Previous work has identified a large corticofugal projection from auditory cortex to inferior colliculus that could produce task-dependent changes in selectivity in the inferior colliculus. This study will test the role of these corticofugal projections by optogenetically inactivating auditory cortex during recordings from the inferior colliculus. Selective inactivation of specific pathways will characterize how this network of brain areas works together to produce effective auditory behaviors.

Computational modeling tools will be used to determine, from an algorithmic perspective, how neurons encode information about natural stimuli and how this encoding changes as attention shifts between features. Data collected during behavior will be used to develop models that combine bottom-up sensory processing with top-down behavioral control. This computational approach builds on classic characterizations of neural stimulus-response relationships using spectro-temporal receptive field models. New models will be developed that incorporate behavioral state variables and nonlinear biological circuit elements into these established frameworks. Together, these studies will provide new insight into the computational strategies used by the behaving brain to process complex sounds in real-world contexts.
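To make the modeling framework concrete, here is a minimal sketch of the classic formulation the paragraph above refers to, together with one hypothetical state-dependent extension. In a linear-nonlinear spectro-temporal receptive field (STRF) model, a neuron's firing rate is predicted by filtering the stimulus spectrogram with the STRF and passing the result through a static output nonlinearity. The gain term g(a(t)) in the second equation is an illustrative assumption about how a behavioral state variable could enter such a model, not the project's actual formulation.

```latex
% Classic linear-nonlinear STRF model: the stimulus spectrogram
% s(x, t) (frequency channel x, time t) is filtered by the
% spectro-temporal receptive field h(x, tau) and passed through a
% static output nonlinearity f to predict the firing rate r(t).
r(t) = f\!\left( \sum_{x} \sum_{\tau=0}^{T} h(x,\tau)\, s(x,\, t-\tau) \right)

% One hypothetical state-dependent extension (illustrative only):
% a behavioral state variable a(t), e.g. attention to a particular
% sound feature, scales the filter output through a gain function g
% before the nonlinearity is applied.
r(t) = f\!\left( g\big(a(t)\big) \sum_{x,\tau} h(x,\tau)\, s(x,\, t-\tau) \right)
```

In a model of this form, a shift of attention changes the neuron's effective response gain while its underlying spectro-temporal tuning stays fixed; richer variants could instead let the state variable reshape the filter h itself.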
| Status | Finished |
| --- | --- |
| Effective start/end date | 2/1/16 → 1/31/21 |
Funding
- National Institutes of Health: $327,250.00