Beyond Memorization: The Challenge of Random Memory Access in Language Models
arXiv (2024)
Abstract
Recent developments in Language Models (LMs) have shown their effectiveness
in NLP tasks, particularly in knowledge-intensive tasks. However, the
mechanisms underlying knowledge storage and memory access within their
parameters remain elusive. In this paper, we investigate whether a generative
LM (e.g., GPT-2) is able to access its memory sequentially or randomly. Through
carefully designed synthetic tasks covering the scenarios of full recitation,
selective recitation, and grounded question answering, we reveal that LMs manage
to sequentially access their memory while encountering challenges in randomly
accessing memorized content. We find that techniques including recitation and
permutation improve the random memory access capability of LMs. Furthermore, by
applying the recitation intervention to the realistic scenario of open-domain
question answering, we validate that enhancing random access through recitation leads to
notable improvements in question answering. The code to reproduce our
experiments can be found at https://github.com/sail-sg/lm-random-memory-access.
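
To make the recitation intervention concrete, below is a minimal sketch of recitation-then-answer prompting with GPT-2, the model studied in the paper. It assumes a Hugging Face transformers causal LM; the prompt templates, the example question, and the generate helper are illustrative assumptions rather than the paper's exact implementation, which lives in the linked repository.

```python
# Minimal sketch: recitation-then-answer prompting (assumed templates).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Greedy-decode a continuation for the given prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=False,
        pad_token_id=tokenizer.eos_token_id,  # silence GPT-2 pad warning
    )
    # Strip the prompt tokens; return only the new continuation.
    return tokenizer.decode(
        output_ids[0, inputs["input_ids"].shape[1]:],
        skip_special_tokens=True,
    )

question = "When was the Eiffel Tower completed?"  # hypothetical example

# Direct answering requires *random* access into memorized content.
direct_answer = generate(f"Question: {question}\nAnswer:")

# Recitation first reduces the problem to *sequential* access: the model
# reproduces the relevant memorized passage token by token, then answers
# conditioned on its own recitation placed in the context window.
passage = generate(f"Recite the passage relevant to: {question}\nPassage:")
recited_answer = generate(f"Passage: {passage}\nQuestion: {question}\nAnswer:")
```

The design point is that recitation converts a random-access lookup into a sequential decoding pass followed by reading from the visible context, which is the capability the paper finds LMs already have.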