Building up visual memories from sensory evidence

Journal of Vision (2023)

Abstract
Visual short-term memory is a fundamental memory structure that supports the online maintenance of visual information in the service of goal-directed action. While much work examines the nature of visual memories, relatively little work investigates how people use their underlying, complex sensory information to build memory representations and make memory-based decisions. We examined this question by comparing two variants of a signal detection model. Both models start with an assumed high-dimensional set of sensory evidence; they differ in how this evidence is integrated to make memory decisions. Through the lens of the first signal detection model, people pool sensory evidence via summation or averaging of sensory signals; according to the alternative signal detection model, people take the maximum of the distribution of sensory signals. These two integration rules naturally result in different distributions of evidence: a Gaussian versus a Gumbel distribution over sensory evidence, respectively. To distinguish them, we compared the models on their ability to jointly fit data across different numbers of alternatives in a multiple-alternative forced-choice visual memory paradigm. Across two experiments, we found evidence that people pool sensory evidence via averaging or summing to make memory-based decisions. Furthermore, our findings suggest that this pooling process is robust: it is used both when people remember simple features (color) presented simultaneously (p < .001; dz = .77) and when they remember complex real-world objects presented sequentially (p < .001; dz = .81). This work elucidates the processes that link perception and memory and opens avenues for establishing novel linking propositions between neural and cognitive models of visual memory.
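To make the contrast between the two decision rules concrete, the following is a minimal simulation sketch, not the authors' fitted model: each alternative in an m-AFC trial carries a set of noisy sensory signals, and the observer combines them either by averaging (pooling, approximately Gaussian evidence) or by taking the maximum (Gumbel-like evidence). The parameter names (d_prime, n_features) and their values are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mafc_accuracy(rule, m, d_prime=1.0, n_features=8, n_trials=100_000):
    """Simulated m-AFC accuracy when each alternative carries a set of
    noisy sensory signals combined by either a pooling or a max rule.

    rule       : "pool" (average the signals) or "max" (take the largest)
    m          : number of alternatives; alternative 0 is the studied target
    d_prime    : assumed mean boost on each feature of the target
    n_features : assumed dimensionality of the sensory evidence
    """
    # Unit-variance noise for every alternative and feature; the target
    # receives an additive signal boost of d_prime on each feature.
    signals = rng.standard_normal((n_trials, m, n_features))
    signals[:, 0, :] += d_prime

    if rule == "pool":
        # Averaging (or summing) yields approximately Gaussian evidence.
        evidence = signals.mean(axis=2)
    else:
        # The maximum of many signals approaches a Gumbel distribution.
        evidence = signals.max(axis=2)

    # The observer chooses the alternative with the strongest evidence.
    return np.mean(evidence.argmax(axis=1) == 0)

for m in (2, 4, 8):
    print(m, mafc_accuracy("pool", m), mafc_accuracy("max", m))
```

Because the two rules produce differently shaped evidence distributions, their predicted accuracies diverge as the number of alternatives grows, which is what makes jointly fitting data across set sizes diagnostic between them.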
Keywords
visual memories, evidence