Towards Practical Privacy-Preserving Solution for Outsourced Neural Network Inference

2022 IEEE 15th International Conference on Cloud Computing (CLOUD)(2022)

Abstract
When a neural network model and data are outsourced to a cloud server for inference, it is desirable to preserve the privacy of the model and data, as the involved parties (i.e., the cloud server and the model/data-providing clients) may not mutually trust one another. Solutions have been proposed based on multi-party computation, trusted execution environments (TEE), and leveled or fully homomorphic encryption (LHE or FHE), but they all have limitations that hamper practical application. We propose a new framework based on the integration of LHE and TEE, which enables collaboration among three mutually untrusted parties while minimizing the involvement of the resource-constrained TEE and fully utilizing the untrusted but resource-rich part of the server. We also propose a generic and efficient LHE-based inference scheme, along with optimizations, as an important performance-determining component of the framework. We implemented and evaluated the proposed scheme on a moderate platform, and the evaluations show that our proposed system is applicable and scalable to various settings, and that it has better or comparable performance when compared with state-of-the-art solutions that are more restrictive in applicability and scalability.
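The abstract does not detail the LHE-based inference scheme itself, but the core idea behind such schemes is that a server can evaluate linear layers (dot products with its plaintext weights) directly on a client's encrypted inputs. The sketch below illustrates this with a toy Paillier-style additively homomorphic scheme; the key sizes are insecure and all function names and parameters are illustrative choices for this example, not taken from the paper.

```python
import math
import random

def keygen(p=1009, q=1013):
    """Toy Paillier key generation. p, q are small primes -- NOT secure."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)  # valid since we use g = n + 1
    return (n,), (n, lam, mu)  # public key, secret key

def encrypt(pk, m):
    """Encrypt integer m < n under the public key."""
    (n,) = pk
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    """Recover m from a ciphertext using the secret key."""
    n, lam, mu = sk
    n2 = n * n
    ell = (pow(c, lam, n2) - 1) // n  # the L(x) = (x-1)/n function
    return (ell * mu) % n

def enc_dot(pk, enc_x, w):
    """Server side: dot product of encrypted vector enc_x with
    plaintext weights w, computed entirely on ciphertexts.
    Multiplying ciphertexts adds plaintexts; exponentiation by a
    plaintext scalar multiplies the underlying plaintext."""
    (n,) = pk
    n2 = n * n
    acc = 1  # a trivial encryption of 0
    for c, wi in zip(enc_x, w):
        acc = (acc * pow(c, wi, n2)) % n2
    return acc
```

For example, a client encrypts x = [3, 1, 4], the server computes `enc_dot` with its weights w = [2, 5, 7] without ever decrypting, and the client decrypts the result to obtain 3*2 + 1*5 + 4*7 = 39. A leveled HE scheme as used in the paper additionally supports a bounded number of ciphertext-ciphertext multiplications, which this additive toy scheme does not.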
Keywords
Outsourcing, Privacy, Neural Networks, Homomorphic Encryption, Trusted Execution Environment