TY - JOUR
T1 - Incorporating behavioral and sensory context into spectro-temporal models of auditory encoding
AU - David, Stephen V.
N1 - Funding Information:
This work was supported by grants from the NIH (R01 DC014950), DARPA (D15 AP00101), and NSF (PHY11-25915). Thank you to Daniela Saderi and two anonymous reviewers for helpful comments on the manuscript.
Publisher Copyright:
© 2017 The Author
PY - 2018/3
Y1 - 2018/3
N2 - For several decades, auditory neuroscientists have used spectro-temporal encoding models to understand how neurons in the auditory system represent sound. Derived from early applications of systems identification tools to the auditory periphery, the spectro-temporal receptive field (STRF) and more sophisticated variants have emerged as an efficient means of characterizing representation throughout the auditory system. Most of these encoding models describe neurons as static sensory filters. However, auditory neural coding is not static. Sensory context, reflecting the acoustic environment, and behavioral context, reflecting the internal state of the listener, can both influence sound-evoked activity, particularly in central auditory areas. This review explores recent efforts to integrate context into spectro-temporal encoding models. It begins with a brief tutorial on the basics of estimating and interpreting STRFs. It then describes three recent studies that have characterized contextual effects on STRFs, emerging over a range of timescales, from many minutes to tens of milliseconds. An important theme of this work is not simply that context influences auditory coding, but also that contextual effects span a large continuum of internal states. The added complexity of these context-dependent models introduces new experimental and theoretical challenges that must be addressed in order for the models to be used effectively. Several new methodological advances promise to address these limitations and allow the development of more comprehensive context-dependent models in the future.
AB - For several decades, auditory neuroscientists have used spectro-temporal encoding models to understand how neurons in the auditory system represent sound. Derived from early applications of systems identification tools to the auditory periphery, the spectro-temporal receptive field (STRF) and more sophisticated variants have emerged as an efficient means of characterizing representation throughout the auditory system. Most of these encoding models describe neurons as static sensory filters. However, auditory neural coding is not static. Sensory context, reflecting the acoustic environment, and behavioral context, reflecting the internal state of the listener, can both influence sound-evoked activity, particularly in central auditory areas. This review explores recent efforts to integrate context into spectro-temporal encoding models. It begins with a brief tutorial on the basics of estimating and interpreting STRFs. It then describes three recent studies that have characterized contextual effects on STRFs, emerging over a range of timescales, from many minutes to tens of milliseconds. An important theme of this work is not simply that context influences auditory coding, but also that contextual effects span a large continuum of internal states. The added complexity of these context-dependent models introduces new experimental and theoretical challenges that must be addressed in order for the models to be used effectively. Several new methodological advances promise to address these limitations and allow the development of more comprehensive context-dependent models in the future.
UR - http://www.scopus.com/inward/record.url?scp=85040360894&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85040360894&partnerID=8YFLogxK
U2 - 10.1016/j.heares.2017.12.021
DO - 10.1016/j.heares.2017.12.021
M3 - Review article
C2 - 29331232
AN - SCOPUS:85040360894
SN - 0378-5955
VL - 360
SP - 107
EP - 123
JO - Hearing Research
JF - Hearing Research
ER -