Confidential Inference in Decision Trees: FPGA Design and Implementation

2022 IFIP/IEEE 30th International Conference on Very Large Scale Integration (VLSI-SoC), 2022

Abstract
In confidential computing, algorithms operate on encrypted inputs to produce encrypted outputs. Specifically, in confidential inference, Alice has the parameters of the machine-learning model but does not want to reveal them to Bob, who has the data. Bob wants to use Alice's model for inference but does not want to reveal his data. Alice and Bob agree to use homomorphic encryption so that the inference engine runs in full confidence, revealing neither the model nor the data. They find that fully homomorphic encryption is very time consuming and very challenging to accelerate in hardware. In this particular case, however, homomorphic encryption can be made computationally efficient and can even be readily accelerated in hardware. In this paper, we show how Alice and Bob run the inference engine in full confidence and present an FPGA implementation of the specialized homomorphic computing algorithm they use. We further evaluate the resources needed to implement the encrypted decision tree and compare them with those of a plain decision tree. Confidential inference tests are run on the encrypted FPGA design using the MNIST dataset.
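The abstract does not detail the specialized homomorphic scheme, so the following is only a minimal sketch, assuming a textbook additively homomorphic (Paillier) cryptosystem, of how a single decision-tree node comparison could play out in the Alice/Bob setting described above: Bob encrypts a feature value, Alice subtracts her node threshold homomorphically without seeing the feature, and the branch decision is recovered from the sign of the difference. The key sizes, the direct decryption of the comparison result, and all names below are illustrative assumptions, not the paper's construction.

# Minimal sketch (assumption: Paillier additive HE, not the paper's scheme)
# of one confidential decision-tree node comparison between Alice (model
# owner) and Bob (data owner). Toy-sized primes; insecure, illustration only.
import math
import random

# --- Bob: key generation ---------------------------------------------------
p, q = 1000003, 1000033            # illustrative small primes (insecure)
n, n_sq = p * q, (p * q) ** 2
g = n + 1                          # standard Paillier generator choice
lam = math.lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n            # Paillier's L function

mu = pow(L(pow(g, lam, n_sq)), -1, n)

def encrypt(m):
    # Bob encrypts a plaintext integer 0 <= m < n.
    r = random.randrange(2, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    # Bob decrypts; residues above n // 2 are read as negative numbers.
    m = (L(pow(c, lam, n_sq)) * mu) % n
    return m - n if m > n // 2 else m

# --- Alice: homomorphic evaluation of one tree node -------------------------
def he_sub_plain(c_x, t):
    # Enc(x) * Enc(-t) = Enc(x - t); Alice never sees Bob's value x.
    return (c_x * pow(g, (-t) % n, n_sq)) % n_sq

c_pixel = encrypt(173)             # Bob: one encrypted MNIST pixel value
threshold = 128                    # Alice: node threshold (model parameter)
c_diff = he_sub_plain(c_pixel, threshold)

# In a real protocol the comparison result would be obtained obliviously;
# here Bob simply decrypts the difference to expose the branch decision.
go_right = decrypt(c_diff) >= 0
print("take right branch:", go_right)   # True, since 173 >= 128

An actual deployment would also have to hide which branch is taken at each node (oblivious tree traversal) and would use a hardware-friendly scheme such as the one evaluated on the FPGA in the paper, rather than Paillier.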
Keywords
plain decision tree, confidential inference tests, encrypted FPGA design, decision trees, confidential computing, encrypted inputs, encrypted outputs, machine-learning model, Alice's model, homomorphic encryption, inference engine, FPGA implementation, specialized homomorphic computing algorithm, encrypted decision tree