Quantifying Association Capabilities of Large Language Models and Its Implications on Privacy Leakage
Conference of the European Chapter of the Association for Computational Linguistics (2023)
Abstract
The advancement of large language models (LLMs) brings notable improvements
across various applications, while simultaneously raising concerns about
potential private data exposure. One notable capability of LLMs is their
ability to form associations between different pieces of information, which
becomes problematic when that information includes personally identifiable
information (PII).
This paper delves into the association capabilities of language models, aiming
to uncover the factors that influence their proficiency in associating
information. Our study reveals that as models scale up, their capacity to
associate entities/information intensifies, particularly when target pairs
demonstrate shorter co-occurrence distances or higher co-occurrence
frequencies. However, there is a distinct performance gap when associating
commonsense knowledge versus PII, with the latter showing lower accuracy.
Despite the proportion of accurately predicted PII being relatively small, LLMs
still demonstrate the capability to predict specific instances of email
addresses and phone numbers when provided with appropriate prompts. These
findings underscore the potential risk to PII confidentiality posed by the
evolving capabilities of LLMs, especially as they continue to expand in scale
and power.
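
To make the co-occurrence factors named in the abstract concrete, the sketch below shows one simple way such statistics might be computed over a tokenized corpus: the co-occurrence frequency of two entities and their average token distance. This is an illustrative assumption, not the paper's actual measurement procedure; the corpus, entity strings, and nearest-occurrence pairing are hypothetical.

```python
# Illustrative sketch (not from the paper): co-occurrence frequency and
# mean token distance between two entity strings in a tokenized corpus.
from typing import List, Tuple

def co_occurrence_stats(tokens: List[str], entity_a: str, entity_b: str) -> Tuple[int, float]:
    """Return (co-occurrence frequency, mean token distance) of two entities.

    Each occurrence of entity_a is paired with its nearest occurrence of
    entity_b; distance is measured in tokens.
    """
    positions_a = [i for i, t in enumerate(tokens) if t == entity_a]
    positions_b = [i for i, t in enumerate(tokens) if t == entity_b]
    if not positions_a or not positions_b:
        return 0, float("inf")

    distances = [min(abs(i - j) for j in positions_b) for i in positions_a]
    return len(distances), sum(distances) / len(distances)

# Toy usage: shorter distances and higher frequencies are the conditions
# under which the paper reports stronger association by larger models.
corpus = "alice works at acme and alice can be reached at alice@example.com".split()
freq, mean_dist = co_occurrence_stats(corpus, "alice", "alice@example.com")
print(freq, mean_dist)
```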