Incorporating behavioral and sensory context into spectro-temporal models of auditory encoding

Research output: Contribution to journal › Review article › peer-review


Abstract

For several decades, auditory neuroscientists have used spectro-temporal encoding models to understand how neurons in the auditory system represent sound. Derived from early applications of system identification tools to the auditory periphery, the spectro-temporal receptive field (STRF) and more sophisticated variants have emerged as an efficient means of characterizing representation throughout the auditory system. Most of these encoding models describe neurons as static sensory filters. However, auditory neural coding is not static. Sensory context, reflecting the acoustic environment, and behavioral context, reflecting the internal state of the listener, can both influence sound-evoked activity, particularly in central auditory areas. This review explores recent efforts to integrate context into spectro-temporal encoding models. It begins with a brief tutorial on the basics of estimating and interpreting STRFs. It then describes three recent studies that have characterized contextual effects on STRFs, emerging over a range of timescales, from many minutes to tens of milliseconds. An important theme of this work is not simply that context influences auditory coding, but also that contextual effects span a large continuum of internal states. The added complexity of these context-dependent models introduces new experimental and theoretical challenges that must be addressed before such models can be used effectively. Several new methodological advances promise to address these challenges and allow the development of more comprehensive context-dependent models in the future.
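
The STRF referenced in the abstract is, at its core, a linear filter mapping a time-lagged stimulus spectrogram onto a neuron's firing rate, typically fit with regularized regression. As a rough illustration of the estimation step covered in the review's tutorial section, the sketch below fits an STRF to simulated data using ridge regression; the estimator, regularization strength, array shapes, and variable names are illustrative assumptions rather than the specific methods used in the review.

```python
import numpy as np

# Illustrative STRF estimation on simulated data (ridge regression).
# Shapes, regularization, and the synthetic "neuron" are assumptions for
# demonstration only.
rng = np.random.default_rng(0)

n_freq, n_lag, n_time = 18, 15, 5000   # spectral channels, time lags, time bins

# Simulated stimulus spectrogram (frequency x time) and a ground-truth STRF
# used to generate a synthetic response.
spec = rng.normal(size=(n_freq, n_time))
true_strf = rng.normal(size=(n_freq, n_lag)) * np.hanning(n_lag)

# Design matrix: each row holds the spectrogram over the preceding n_lag bins.
X = np.zeros((n_time, n_freq * n_lag))
for lag in range(n_lag):
    X[lag:, lag * n_freq:(lag + 1) * n_freq] = spec[:, :n_time - lag].T

# Synthetic response: linear filter output plus Gaussian noise.
w_true = true_strf.T.reshape(-1)           # lag-major weight vector
y = X @ w_true + rng.normal(scale=2.0, size=n_time)

# Ridge regression: w = (X'X + lambda*I)^(-1) X'y.
lam = 1e2
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
strf_hat = w_hat.reshape(n_lag, n_freq).T  # back to (frequency x lag)

# How well does the estimate recover the simulated filter?
r = np.corrcoef(true_strf.ravel(), strf_hat.ravel())[0, 1]
print(f"correlation between true and estimated STRF: {r:.2f}")
```

Ridge regularization is used here simply because correlated stimulus features and limited recording time make the unregularized least-squares estimate noisy; other estimators (e.g., boosting or Bayesian priors) are common in practice.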

Original language: English (US)
Pages (from-to): 107-123
Number of pages: 17
Journal: Hearing Research
Volume: 360
DOIs
State: Published - Mar 2018

ASJC Scopus subject areas

  • Sensory Systems
