Logits of API-Protected LLMs Leak Proprietary Information
arXiv (2024)
Abstract
The commercialization of large language models (LLMs) has led to the common
practice of high-level API-only access to proprietary models. In this work, we
show that even with a conservative assumption about the model architecture, it
is possible to learn a surprisingly large amount of non-public information
about an API-protected LLM from a relatively small number of API queries (e.g.,
costing under $1,000 for OpenAI's gpt-3.5-turbo). Our findings are centered on
one key observation: most modern LLMs suffer from a softmax bottleneck, which
restricts the model outputs to a linear subspace of the full output space. We
show that this lends itself to a model image or a model signature which unlocks
several capabilities with affordable cost: efficiently discovering the LLM's
hidden size, obtaining full-vocabulary outputs, detecting and disambiguating
different model updates, identifying the source LLM given a single full LLM
output, and even estimating the output layer parameters. Our empirical
investigations show the effectiveness of our methods, which allow us to
estimate the embedding size of OpenAI's gpt-3.5-turbo to be about 4,096.
Lastly, we discuss ways that LLM providers can guard against these attacks, as
well as how these capabilities can be viewed as a feature (rather than a bug)
by allowing for greater transparency and accountability.
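To make the key observation concrete: under the softmax bottleneck, every logit vector the model can produce has the form ℓ = W h for a fixed v × d output matrix W (v = vocabulary size, d = hidden size), so all outputs lie in a d-dimensional linear subspace of R^v. Once more than d full-vocabulary outputs are collected, the numerical rank of the stacked outputs reveals d. The sketch below is an illustrative reconstruction of this rank-based idea, not the paper's exact procedure; the function name and all parameters are hypothetical, and it assumes full-vocabulary logit (or log-probability) vectors have already been recovered from the API.

```python
import numpy as np

def estimate_hidden_size(logit_vectors, tol=1e-6):
    """Estimate the hidden (embedding) size d of an API-protected LLM.

    Under the softmax bottleneck, each output is W @ h plus a per-query
    normalization constant, so the stacked, row-centered outputs have
    rank at most d. Counting significant singular values recovers d once
    more than d linearly independent outputs have been collected.
    (Illustrative sketch; not the paper's exact algorithm.)
    """
    # Rows: one full-vocabulary output vector per API query (shape n x v).
    L = np.stack(logit_vectors)
    # Center each row: this cancels the per-query softmax/log-sum-exp
    # offset, since subtracting the row mean removes any constant shift.
    L = L - L.mean(axis=1, keepdims=True)
    singular_values = np.linalg.svd(L, compute_uv=False)
    # Numerical rank: singular values above a tolerance relative to the largest.
    return int(np.sum(singular_values > tol * singular_values[0]))

if __name__ == "__main__":
    # Synthetic check with a fake "model": v = 1000, d = 64, n = 200 queries.
    rng = np.random.default_rng(0)
    v, d, n = 1000, 64, 200
    W = rng.standard_normal((v, d))
    # Each output is W @ h plus a per-query constant offset, mimicking
    # log-probabilities recovered from an API.
    outputs = [W @ rng.standard_normal(d) + rng.standard_normal() for _ in range(n)]
    print(estimate_hidden_size(outputs))  # -> 64
```

On real APIs the singular values drop sharply at the hidden size; this is the kind of evidence behind the paper's estimate of roughly 4,096 for gpt-3.5-turbo.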