Incremental Accumulation of Linguistic Context in Artificial and Biological Neural Networks

bioRxiv (2024)

Abstract
Accumulated evidence suggests that Large Language Models (LLMs) are useful for predicting neural signals related to narrative processing. However, the way LLMs integrate context over long timescales differs fundamentally from the way the brain does. In this study, we show that unlike LLMs, which process large contextual windows in parallel, the context available to the brain is limited to short windows of a few tens of words. We hypothesize that whereas lower-level brain areas process short contextual windows, higher-order areas in the default-mode network (DMN) engage in an online incremental mechanism in which the incoming short context is summarized and integrated with information accumulated across long timescales. Consequently, we introduce a novel LLM that, instead of processing the entire context at once, incrementally generates a concise summary of previous information. As predicted, we found that neural activity in the DMN was better predicted by the incremental model, whereas lower-level areas were better predicted by a short-context-window LLM.

Competing Interest Statement

The authors have declared no competing interest.
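The mechanism described in the abstract is an incremental summarize-and-integrate loop: the narrative is processed in short windows of a few tens of words, and a concise running summary is carried forward in place of the full history. Below is a minimal, hypothetical sketch of such a loop in Python; the window size, summary budget, and the trivial truncation-based `summarize` stand-in are illustrative assumptions, not the authors' implementation.

```python
"""Sketch of incremental context accumulation (illustrative only):
process text in short windows, keep a bounded running summary, and
feed (summary + current window) to the model at each step."""

from typing import Callable, List


def split_into_windows(words: List[str], window_size: int) -> List[List[str]]:
    """Split the token stream into consecutive short windows."""
    return [words[i:i + window_size] for i in range(0, len(words), window_size)]


def truncate_summary(summary_words: List[str], max_len: int) -> List[str]:
    """Toy 'summarizer': keep only the most recent max_len words.
    A real system would replace this with an LLM-generated summary."""
    return summary_words[-max_len:]


def incremental_contexts(
    words: List[str],
    window_size: int = 32,      # short window of a few tens of words (assumed)
    summary_budget: int = 64,   # fixed size of the accumulated summary (assumed)
    summarize: Callable[[List[str], int], List[str]] = truncate_summary,
) -> List[List[str]]:
    """Return, for each window, the context the model would actually see:
    the accumulated summary concatenated with the current short window."""
    summary: List[str] = []
    contexts: List[List[str]] = []
    for window in split_into_windows(words, window_size):
        model_input = summary + window          # summary + fresh short context
        contexts.append(model_input)
        # Fold the new window into the running summary, keeping it concise.
        summary = summarize(summary + window, summary_budget)
    return contexts


if __name__ == "__main__":
    text = ("once upon a time there was a long narrative " * 20).split()
    for step, ctx in enumerate(incremental_contexts(text)):
        print(f"step {step}: model sees {len(ctx)} words")
```

In this scheme the model's input stays bounded (summary budget plus one window) regardless of narrative length, which is the contrast the paper draws with standard LLMs that attend to the entire context in parallel.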