MEMORYLLM: Towards Self-Updatable Large Language Models
CoRR (2024)
Abstract
Existing Large Language Models (LLMs) usually remain static after deployment,
which might make it hard to inject new knowledge into the model. We aim to
build models containing a considerable portion of self-updatable parameters,
enabling the model to integrate new knowledge effectively and efficiently. To
this end, we introduce MEMORYLLM, a model that comprises a transformer and a
fixed-size memory pool within the latent space of the transformer. MEMORYLLM
can self-update with text knowledge and memorize the knowledge injected
earlier. Our evaluations demonstrate the ability of MEMORYLLM to effectively
incorporate new knowledge, as evidenced by its performance on model editing
benchmarks. Meanwhile, the model exhibits long-term information retention
capacity, which is validated through our custom-designed evaluations and
long-context benchmarks. MEMORYLLM also shows operational integrity without any
sign of performance degradation even after nearly a million memory updates.
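The abstract describes the core design: a transformer paired with a fixed-size memory pool in its latent space that self-updates as new text knowledge arrives, while retaining earlier injections. Below is a minimal, hypothetical sketch of one way such a fixed-size pool could be maintained, by evicting a random fraction of slots and appending newly encoded latent vectors. The class `MemoryPool`, its `self_update` method, and all shapes are illustrative assumptions for exposition, not the authors' released implementation.

```python
import torch


class MemoryPool:
    """Fixed-size latent memory pool (hypothetical sketch, not the paper's code).

    Holds `num_slots` hidden vectors per transformer layer. A self-update
    evicts a random subset of slots and writes in freshly encoded knowledge
    vectors, so the pool size stays constant while older content decays
    gradually rather than being overwritten wholesale.
    """

    def __init__(self, num_layers: int, num_slots: int, hidden_dim: int):
        self.num_slots = num_slots
        # One fixed-size bank of latent vectors per transformer layer.
        self.pool = [torch.randn(num_slots, hidden_dim) for _ in range(num_layers)]

    def self_update(self, new_vectors: list[torch.Tensor]) -> None:
        """Inject knowledge: per layer, drop k random slots, append k new ones."""
        for layer, new_v in enumerate(new_vectors):
            k = new_v.shape[0]  # number of new latent vectors for this layer
            # Keep a random subset of old slots so the pool size is unchanged.
            keep = torch.randperm(self.num_slots)[: self.num_slots - k]
            self.pool[layer] = torch.cat([self.pool[layer][keep], new_v], dim=0)


# Usage sketch: encode a new document into per-layer latents (encoder omitted),
# then fold them into the pool without growing it.
pool = MemoryPool(num_layers=2, num_slots=128, hidden_dim=16)
new_knowledge = [torch.randn(8, 16) for _ in range(2)]  # placeholder latents
pool.self_update(new_knowledge)
assert all(p.shape[0] == 128 for p in pool.pool)  # size is invariant
```

Under this reading, random eviction makes each old memory's survival probability decay geometrically with the number of updates, which is consistent with the abstract's claims of gradual long-term retention and stable behavior over nearly a million memory updates; the actual update rule in MEMORYLLM may differ in its details.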