Secure Transformer Inference Protocol
arXiv (2023)
Abstract
Security of model parameters and user data is critical for Transformer-based
services, such as ChatGPT. While recent strides in secure two-party protocols
have successfully addressed security concerns in serving Transformer models,
their adoption is practically infeasible due to the prohibitive cryptographic
overheads involved. Drawing insights from our hands-on experience in developing
two real-world Transformer-based services, we identify the inherent efficiency
bottleneck in the two-party assumption. To overcome this limitation, we propose
a novel three-party threat model. Within this framework, we design a
semi-symmetric permutation-based protection scheme and present STIP, the first
secure Transformer inference protocol without any inference accuracy loss.
Experiments on representative Transformer models in real systems show that STIP
has practical security and outperforms state-of-the-art secure two-party
protocols in efficiency by millions of times.
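The core intuition behind a permutation-based protection scheme can be illustrated with a toy example (this is a simplified sketch of the general idea, not the actual STIP protocol): if the input features of a linear layer are shuffled by a secret permutation and the weight rows are shuffled consistently, the layer's output is unchanged, so a party holding only permuted data computes the exact result without seeing the originals.

```python
import numpy as np

# Toy illustration of permutation-based protection (not the actual STIP
# protocol): permuting a linear layer's input features together with the
# matching rows of its weight matrix leaves the output unchanged.
rng = np.random.default_rng(0)
d_in, d_out, batch = 8, 4, 3

x = rng.standard_normal((batch, d_in))  # plaintext activations
W = rng.standard_normal((d_in, d_out))  # plaintext weights

pi = rng.permutation(d_in)              # secret permutation of input features
x_perm = x[:, pi]                       # user-side: permute activations
W_perm = W[pi, :]                       # owner-side: permute weight rows

y = x @ W                               # reference result on plaintext
y_perm = x_perm @ W_perm                # result computed on permuted data

assert np.allclose(y, y_perm)           # identical outputs: no accuracy loss
```

Because the permuted computation is numerically identical to the plaintext one, this style of protection introduces no inference accuracy loss, consistent with the claim in the abstract.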