ConfusionPrompt: Practical Private Inference for Online Large Language Models
arXiv (2023)
Abstract
State-of-the-art large language models (LLMs) are commonly deployed as online
services, necessitating users to transmit informative prompts to cloud servers,
thus engendering substantial privacy concerns. In response, we present
ConfusionPrompt, a novel private LLM inference framework designed to obfuscate
the server by: (i) decomposing the prompt into sub-prompts, and (ii) generating
pseudo prompts along with the genuine sub-prompts as input to the online LLM.
The returned responses can then be recomposed by the user to obtain the
complete final response. This design endows our framework with two advantages
over previous protocols: (i) it can be seamlessly integrated with existing
black-box LLMs, and (ii) it achieves a significantly better privacy-utility
trade-off than existing text perturbation-based methods. We develop a
(λ, μ, ρ)-privacy model to formulate the requirement for a
privacy-preserving group of prompts, and provide a complexity analysis,
affirming ConfusionPrompt's efficiency. Our empirical evaluation reveals that
our method offers significantly higher utility compared to local inference
methods using open-source models and perturbation-based techniques, while also
requiring much less memory than open-source LLMs.
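The client-side flow described above (decompose, mix with pseudo prompts, query, recompose) can be sketched as follows. This is an illustrative sketch only, not the authors' implementation; the `decompose`, `make_pseudo`, and `recompose` callables are hypothetical stand-ins supplied by the caller.

```python
import random

def confusion_query(prompt, llm, decompose, make_pseudo, recompose, n_pseudo=2):
    """Illustrative sketch of a ConfusionPrompt-style client-side flow.

    1. Split the genuine prompt into sub-prompts.
    2. Generate pseudo sub-prompts to obfuscate the server.
    3. Send the shuffled mixture of genuine and pseudo sub-prompts.
    4. Recompose only the genuine responses into the final answer.
    """
    genuine = decompose(prompt)
    pseudo = [make_pseudo(sp) for sp in genuine for _ in range(n_pseudo)]
    # Tag genuine sub-prompts with their index; pseudo ones with None.
    batch = [(i, sp) for i, sp in enumerate(genuine)]
    batch += [(None, sp) for sp in pseudo]
    random.shuffle(batch)  # the server sees an unordered mixture

    answers = {}
    for idx, sp in batch:
        resp = llm(sp)       # every sub-prompt is sent, genuine or not
        if idx is not None:  # keep only genuine responses
            answers[idx] = resp
    return recompose([answers[i] for i in range(len(genuine))])

# Toy stand-ins so the sketch runs end to end:
result = confusion_query(
    "capital of France? population of Paris?",
    llm=lambda sp: f"answer({sp})",
    decompose=lambda p: [s.strip() for s in p.split("?") if s.strip()],
    make_pseudo=lambda sp: "decoy " + sp,
    recompose=lambda parts: " | ".join(parts),
)
```

Because the server receives genuine and pseudo sub-prompts in a shuffled order, it cannot tell which sub-prompts make up the user's actual query; only the client retains the index mapping needed to recompose the final response.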