Visual and Semantic Representations Predict Subsequent Memory in Perceptual and Conceptual Memory Tests

Cerebral Cortex (2021)

Cited by 15
Abstract
It is generally assumed that the encoding of a single event generates multiple memory representations, which contribute differently to subsequent episodic memory. We used functional magnetic resonance imaging (fMRI) and representational similarity analysis to examine how visual and semantic representations predicted subsequent memory for single item encoding (e.g., seeing an orange). Three levels of visual representations corresponding to early, middle, and late visual processing stages were based on a deep neural network. Three levels of semantic representations were based on normative observed ("is round"), taxonomic ("is a fruit"), and encyclopedic features ("is sweet"). We identified brain regions where each representation type predicted later perceptual memory, conceptual memory, or both (general memory). Participants encoded objects during fMRI, and then completed both a word-based conceptual and picture-based perceptual memory test. Visual representations predicted subsequent perceptual memory in visual cortices, but also facilitated conceptual and general memory in more anterior regions. Semantic representations, in turn, predicted perceptual memory in visual cortex, conceptual memory in the perirhinal and inferior prefrontal cortex, and general memory in the angular gyrus. These results suggest that the contribution of visual and semantic representations to subsequent memory effects depends on a complex interaction between representation, test type, and storage location.
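The core analysis described above, representational similarity analysis, compares a model-derived representational dissimilarity matrix (RDM), e.g. from a DNN layer's activations, with a neural RDM computed from voxel patterns. The sketch below illustrates the general RSA logic with randomly generated stand-in data; the array names, sizes, and the specific distance/correlation choices are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_items, n_feat, n_vox = 20, 50, 100

# Stand-in data: DNN-layer features and fMRI voxel patterns per encoded item
dnn_features = rng.normal(size=(n_items, n_feat))   # hypothetical model activations
voxel_patterns = rng.normal(size=(n_items, n_vox))  # hypothetical neural patterns

# First-order RDMs: pairwise correlation distance (1 - Pearson r) between items,
# returned as condensed vectors of length n_items * (n_items - 1) / 2
model_rdm = pdist(dnn_features, metric="correlation")
neural_rdm = pdist(voxel_patterns, metric="correlation")

# Second-order comparison: rank-correlate the two RDMs
rho, p_value = spearmanr(model_rdm, neural_rdm)
print(f"model-neural RDM correlation: rho={rho:.3f}")
```

In a subsequent-memory design like the one described, this model-neural fit would then be related to whether each item was later remembered on the perceptual or conceptual test, typically within a searchlight or region of interest.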
Keywords
DNNs, episodic memory, object representation, representational similarity analysis, semantic memory