Perceptual priors add sensory detail to contextual feedback processing in V1

bioRxiv (Cold Spring Harbor Laboratory), 2023

Abstract
How do we develop models of the world? Contextualising ambiguous information with previous experience allows us to form an enriched perception. Contextual information and prior knowledge facilitate perceptual processing, improving our recognition of even distorted or obstructed visual inputs. As a result, neuronal processing elicited by identical sensory inputs varies depending on the context in which we encounter those inputs. This modulation is in line with predictive processing accounts of vision, which suggest that the brain uses internal models of the world to predict sensory inputs, with cortical feedback processing in sensory areas encoding beliefs about those inputs. As such, acquiring knowledge should enhance the internal models we use to resolve sensory ambiguities, and feedback signals should encode more accurate estimates of sensory inputs. We used partially occluded Mooney images, ambiguous two-tone images that are difficult to recognise without prior knowledge of the image content, in behavioural and 3T fMRI experiments to measure whether contextual feedback signals in early visual areas are modulated by learning. We show that perceptual priors add sensory detail to contextual feedback processing in early visual areas in response to subsequent presentations of previously ambiguous images.

Competing Interest Statement

The authors have declared no competing interest.
Keywords

perceptual priors, sensory detail, contextual feedback processing