Language Models Know the Value of Numbers
CoRR (2024)
Abstract
Large language models (LLMs) have exhibited impressive competence in various
tasks, but their internal mechanisms for solving mathematical problems remain
under-explored. In this paper, we study a fundamental question: whether
language models know the value of numbers, a basic element in math. To study
the question, we construct a synthetic dataset comprising addition problems and
utilize linear probes to read out input numbers from the hidden states.
Experimental results support the existence of encoded number values in LLMs on
different layers, and these values can be extracted via linear probes. Further
experiments show that LLMs store their calculation results in a similar manner,
and that we can intervene in the output via simple vector additions,
demonstrating a causal connection between encoded numbers and language model outputs. Our research
provides evidence that LLMs know the value of numbers, thus offering insights
for better exploring, designing, and utilizing numeric information in LLMs.
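To make the probing methodology concrete, here is a minimal, self-contained sketch of a linear probe. It uses simulated hidden states (a noisy linear encoding of a number, standing in for activations extracted from an LLM layer) rather than the paper's actual data, and fits the probe by ordinary least squares; all names and dimensions are illustrative assumptions.

```python
import numpy as np

# Toy illustration of linear probing: can a number's value be read out
# of a hidden-state vector with a single linear map?
rng = np.random.default_rng(0)

hidden_dim = 64
n_prompts = 500

# Simulated "hidden states": each prompt's number, encoded along a fixed
# direction plus Gaussian noise (a stand-in for real LLM activations).
numbers = rng.uniform(0, 1000, size=n_prompts)
direction = rng.normal(size=hidden_dim)
hidden_states = (np.outer(numbers, direction)
                 + rng.normal(scale=0.1, size=(n_prompts, hidden_dim)))

# Linear probe: weights w minimizing ||H w - numbers||^2.
w, *_ = np.linalg.lstsq(hidden_states, numbers, rcond=None)
predicted = hidden_states @ w

# R^2 of the readout; a value near 1.0 means the number is linearly
# decodable from the hidden states.
ss_res = np.sum((numbers - predicted) ** 2)
ss_tot = np.sum((numbers - numbers.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"probe R^2 = {r2:.3f}")
```

In the paper's setting, `hidden_states` would instead be activations collected from a given transformer layer while the model reads addition prompts, and a high R² at that layer is evidence the number's value is encoded there.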