Family lexicon: using language models to encode memories of personally familiar and famous people and places in the brain

bioRxiv (2023)

Abstract
Knowledge about personally familiar people and places is extremely rich and varied, involving pieces of semantic information connected in unpredictable ways through past autobiographical memories. In this work we investigate whether we can capture brain processing of personally familiar people and places using subject-specific memories, after transforming them into vectorial semantic representations with language models. First, we asked participants to provide the names of the people and places closest to them in their lives. Then we collected open-ended answers to a questionnaire aimed at capturing various facets of declarative knowledge about these entities. We recorded EEG data from the same participants while they read the names and subsequently mentally visualized their referents. As a control set of stimuli, we also recorded evoked responses to a matched set of famous people and places. We then created original semantic representations for the individual entities using language models: for personally familiar entities, we used the text of the answers to the questionnaire; for famous entities, we employed their Wikipedia page, which reflects shared declarative knowledge about them. Through whole-scalp time-resolved and searchlight encoding analyses, we found that we could capture how the brain processes one's closest people and places using person-specific questionnaire answers, just as we could for famous entities. Encoding performance was significant in a large time window (200-800 ms). In terms of spatio-temporal clusters, two main axes of significant encoding scores emerged: first over bilateral temporo-parietal electrodes (200-500 ms), then over frontal and posterior central electrodes (500-700 ms). We also found that XLM, a contextualized (large) language model, provided superior encoding scores compared with a simpler static language model such as word2vec. Overall, these results indicate that language models, by exploiting small-scale distributional lexical data, can capture subject-specific semantic representations as they are processed in the human brain.

> My parents had five children. We now live in different cities, some of us in foreign countries, and we don't write to each other often. When we do meet up we can be indifferent or distracted. But for us it takes just one word. It takes one word, one sentence, one of the old ones from our childhood, heard and repeated countless times. All it takes is for one of us to say "We haven't come to Bergamo on a military campaign," or "Sulfuric acid stinks of fart," and we immediately fall back into our old relationships, our childhood, our youth, all inextricably linked to those words and phrases.
>
> excerpt from Family Lexicon, by Natalia Ginzburg

### Competing Interest Statement

The authors have declared no competing interest.
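For concreteness, here is a minimal sketch, not the authors' released code, of how per-entity semantic vectors could be derived from free text with a contextualized model such as XLM, alongside a static word2vec-style baseline. The checkpoint name, the mean-pooling strategy, and both helper functions are illustrative assumptions.

```python
# Hypothetical sketch: building entity vectors from descriptive text
# (questionnaire answers or Wikipedia pages). Assumes the HuggingFace
# `transformers` package; checkpoint and pooling are assumptions, not
# the paper's exact configuration.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")  # assumed checkpoint
model = AutoModel.from_pretrained("xlm-roberta-base").eval()

def contextual_entity_vector(text: str) -> np.ndarray:
    """Mean-pool the final hidden states over the (unpadded) tokens."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state         # (1, seq_len, dim)
    mask = inputs["attention_mask"].unsqueeze(-1).float()  # zero out padding
    return ((hidden * mask).sum(1) / mask.sum(1)).squeeze(0).numpy()

def static_entity_vector(text: str, w2v) -> np.ndarray:
    """word2vec-style baseline: average the vectors of in-vocabulary words."""
    vecs = [w2v[w] for w in text.lower().split() if w in w2v]
    return np.mean(vecs, axis=0)
```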
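A time-resolved encoding analysis of the kind described above could then be sketched as follows, assuming `X` holds one semantic vector per entity and `eeg` holds the matching evoked responses. The ridge regularization grid, cross-validation settings, and correlation metric are assumptions in the spirit of the abstract, not the paper's exact pipeline.

```python
# Hypothetical sketch of a time-resolved encoding analysis.
# X:   (n_entities, n_dims)             entity vectors
# eeg: (n_entities, n_channels, n_times) evoked EEG amplitudes
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def time_resolved_encoding(X, eeg, n_splits=5):
    n_entities, n_channels, n_times = eeg.shape
    scores = np.zeros((n_channels, n_times))
    cv = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for t in range(n_times):
        Y = eeg[:, :, t]                 # amplitudes at this time point
        preds = np.zeros_like(Y)
        for train, test in cv.split(X):  # out-of-sample predictions
            reg = RidgeCV(alphas=np.logspace(-2, 4, 7))
            reg.fit(X[train], Y[train])
            preds[test] = reg.predict(X[test])
        for ch in range(n_channels):     # encoding score per channel
            scores[ch, t] = np.corrcoef(preds[:, ch], Y[:, ch])[0, 1]
    return scores                        # (n_channels, n_times) score map
```

Cluster-based permutation tests over the resulting channel-by-time score map could then identify spatio-temporal clusters of the kind the abstract reports.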