Interaction of bottom-up and top-down neural mechanisms in spatial multi-talker speech perception

Current Biology (2022)

Abstract
How the human auditory cortex represents spatially separated simultaneous talkers, and how talkers’ locations and voices modulate the neural representations of attended and unattended speech, remain unclear. Here, we measured the neural responses from electrodes implanted in neurosurgical patients as they performed single-talker and multi-talker speech perception tasks. We found that spatial separation between talkers caused preferential encoding of the contralateral speech in Heschl’s gyrus (HG), planum temporale (PT), and superior temporal gyrus (STG). Location and spectrotemporal features were encoded in different aspects of the neural response. Specifically, the talker’s location changed the mean response level, whereas the talker’s spectrotemporal features altered the variation of the response around its baseline. These components were differentially modulated by the attended talker’s voice or location, which improved the population decoding of attended speech features. Attentional modulation due to the talker’s voice appeared only in auditory areas with longer latencies, whereas attentional modulation due to location was present throughout. Our results show that spatial multi-talker speech perception relies on a separable pre-attentive neural representation, which can be further tuned by top-down attention to the location and voice of the talker.
Keywords
sound localization in humans, spatial multi-talker speech perception, auditory cortex, attention