Revisiting Dynamic Evaluation: Online Adaptation for Large Language Models
arXiv (2024)
Abstract
We consider the problem of fine-tuning the parameters of a language
model online at test time, also known as dynamic evaluation. While it is generally
known that this approach improves the overall predictive performance,
especially when considering distributional shift between training and
evaluation data, we here emphasize the perspective that online adaptation turns
parameters into temporally changing states and provides a form of
context-length extension with memory in weights, more in line with the concept
of memory in neuroscience. We pay particular attention to the speed of
adaptation (in terms of sample efficiency), sensitivity to the overall
distributional drift, and the computational overhead for performing gradient
computations and parameter updates. Our empirical study provides insights on
when online adaptation is particularly interesting. We highlight that with
online adaptation the conceptual distinction between in-context learning and
fine tuning blurs: both are methods to condition the model on previously
observed tokens.
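The core mechanism the abstract describes (updating parameters with a gradient step after each observed token, so that weights act as a temporally changing memory) can be illustrated with a deliberately tiny sketch. This is not the paper's method or model: it uses a hypothetical unigram model over a small vocabulary, with a hand-derived softmax/cross-entropy gradient and plain SGD, purely to show why online adaptation lowers loss under distributional drift.

```python
import math

def softmax(logits):
    # numerically stable softmax over a list of logits
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def dynamic_eval(stream, vocab_size, lr=0.5, adapt=True):
    """Score a toy unigram LM on a token stream, optionally taking one
    SGD step on each token after it is scored (dynamic evaluation)."""
    logits = [0.0] * vocab_size  # start from a uniform model
    total_nll = 0.0
    for tok in stream:
        probs = softmax(logits)
        total_nll += -math.log(probs[tok])  # score BEFORE updating
        if adapt:
            # gradient of NLL w.r.t. logits is (probs - one_hot(tok))
            for i in range(vocab_size):
                grad = probs[i] - (1.0 if i == tok else 0.0)
                logits[i] -= lr * grad
    return total_nll / len(stream)

# a drifted test stream: token 0 dominates, unlike the uniform "training" prior
stream = [0] * 90 + [1] * 10
frozen = dynamic_eval(stream, vocab_size=4, adapt=False)
adapted = dynamic_eval(stream, vocab_size=4, adapt=True)
print(f"frozen NLL: {frozen:.3f}, adapted NLL: {adapted:.3f}")
```

The frozen model pays the uniform cross-entropy (log 4) on every token, while the adapting model quickly shifts probability mass toward the dominant token; the gap between the two runs is exactly the benefit of turning parameters into state. The learning rate here also exposes the abstract's speed-of-adaptation trade-off: a larger `lr` tracks drift faster but overcommits to recent tokens.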