Evading Data Contamination Detection for Language Models is (too) Easy
CoRR (2024)
Abstract
Large language models are widespread, with their performance on benchmarks
frequently guiding user preferences for one model over another. However, the
vast amount of data these models are trained on can inadvertently lead to
contamination with public benchmarks, thus compromising performance
measurements. While recently developed contamination detection methods try to
address this issue, they overlook the possibility of deliberate contamination
by malicious model providers aiming to evade detection. We argue that this
setting is of crucial importance as it casts doubt on the reliability of public
benchmarks. To more rigorously study this issue, we propose a categorization of
both model providers and contamination detection methods. This reveals
vulnerabilities in existing methods that we exploit with EAL, a simple yet
effective contamination technique that significantly inflates benchmark
performance while completely evading current detection methods.
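The abstract names EAL but does not describe its mechanism. Purely as an illustration, the sketch below shows one way a malicious provider could contaminate a model while evading verbatim-overlap detectors: rephrase benchmark samples before fine-tuning on them. Everything here is an assumption, not the paper's published method or code; `rephrase` and `build_contaminated_set` are hypothetical names, and the placeholder paraphraser stands in for an LLM call.

```python
# Hypothetical sketch of rephrase-based benchmark contamination.
# NOT the paper's implementation: all names and the rephrasing step
# are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class BenchmarkSample:
    question: str
    answer: str


def rephrase(text: str) -> str:
    """Stand-in for an LLM-based paraphraser. The idea (assumed here)
    is that training on rewritten samples still inflates benchmark
    scores while exact-match or string-overlap detectors, which look
    for the verbatim benchmark text, no longer fire."""
    return "In other words: " + text  # placeholder; a real pipeline would call an LLM


def build_contaminated_set(benchmark: list[BenchmarkSample]) -> list[dict]:
    """Turn benchmark samples into fine-tuning records after rephrasing
    the questions, keeping the gold answers as targets."""
    return [
        {"prompt": rephrase(s.question), "completion": s.answer}
        for s in benchmark
    ]


if __name__ == "__main__":
    bench = [BenchmarkSample("What is 2 + 2?", "4")]
    print(build_contaminated_set(bench))
```

The key design point this sketch tries to convey is that contamination need not copy benchmark text verbatim: any transformation that preserves the answer mapping can transfer benchmark knowledge into the model while leaving no exact string overlap for a detector to find.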