Similarity enhances visual statistical learning

Alyssa P. Levy, Timothy J. Vickery

Journal of Vision (2023)

Abstract
Visual statistical learning (VSL) is an example of incidental learning that reflects learning of temporal or spatial stimulus co-occurrence. When items appear sequentially and repeat in stereotyped sequences, people learn the sequences without being told to do so. Real-world stimuli over which VSL may take place typically have rich interrelationships, such as similarity or categorization, but this richness has been the subject of controls in most past VSL work. Prior work from our lab has found evidence that categorical relationships strongly impact VSL, but the role of similarity in VSL in the absence of category knowledge is still ambiguous. In the present study, we asked whether similarity of constituent items affects temporal VSL. Participants were shown creature stimuli composed of nine distinct features (e.g., head orientation), with each feature having two possible feature options (e.g., head facing up or head facing to the right). The specific discrete features allowed for systematic manipulation of similarity between paired items. Participants viewed stimuli one at a time, in a stream composed of temporally paired items that were either similar (six shared features) or dissimilar (three shared features). In a test phase, participants performed a two-alternative forced choice task (2AFC), choosing between a target pair previously presented, or a matched-similarity foil pair composed of previously presented items that had been recomposed. Across three experiments, similar pairs were recognized at a higher rate than dissimilar pairs. These results provide evidence of the impact of inter-item similarity on VSL and provide important constraints on models of VSL and the potential role of VSL in everyday cognition.
Keywords
visual, learning