SecureGBM: Secure Multi-Party Gradient Boosting

2019 IEEE International Conference on Big Data (Big Data), 2019

Abstract
Federated machine learning systems are widely used to facilitate joint data analytics across distributed datasets owned by parties that do not trust each other. In this paper, we propose SecureGBM, a novel Gradient Boosting Machines (GBM) framework built on a multi-party computation model with semi-homomorphic encryption, in which the involved parties jointly obtain a shared gradient boosting model while protecting their own data from potential privacy leakage and inferential identification. More specifically, our work focuses on a "dual-party" secure learning scenario: each party owns a unique view (i.e., a set of attributes or features) of the same group of samples, while only one party owns the labels, and neither feature nor label data may be shared with the other party. To achieve this goal, we first extend LightGBM, a well-known implementation of tree-based GBM, by covering its key training and inference operations with SEAL homomorphic encryption schemes. The performance of this re-implementation, however, is severely bottlenecked by the explosive inflation of the communication payloads, as the ciphertexts grow with the length of the plaintexts. We therefore propose to use stochastic approximation techniques to reduce the communication payloads while accelerating the overall training procedure in a statistical manner. Our experiments on real-world data show that SecureGBM secures the communication and computation of the LightGBM training and inference procedures for both parties while losing less than 3% AUC, using the same number of boosting iterations, on a wide range of benchmark datasets. Compared to LightGBM, SecureGBM incurs a 3x to 64x slowdown per training iteration, but it becomes relatively more efficient as the scale of the training dataset increases (i.e., the larger the training set, the lower the slowdown ratio).
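As a rough illustration of the dual-party setting described above, the sketch below shows the kind of encrypted per-bin gradient aggregation a vertically partitioned GBM split search might use: the label-holding party encrypts per-sample gradients for a sampled minibatch, the feature-only party sums them per histogram bin under encryption, and only the bin totals are decrypted. This is not the authors' implementation; it substitutes the python-paillier (phe) additively homomorphic library for the SEAL scheme used in the paper, the minibatch sampling merely stands in for the paper's stochastic approximation, and all names and toy data are illustrative.

```python
# Hypothetical sketch of dual-party encrypted histogram aggregation for GBM.
# Party A holds the labels/gradients; Party B holds only its feature columns
# for the same aligned samples. Paillier is used here instead of SEAL.

import numpy as np
from phe import paillier

rng = np.random.default_rng(0)

# Toy data for 1000 aligned samples.
n_samples = 1000
gradients = rng.normal(size=n_samples)        # per-sample gradients, known only to Party A
feature_b = rng.random(n_samples)             # one feature column, known only to Party B
bins_b = np.digitize(feature_b, np.linspace(0.0, 1.0, 9))  # Party B's histogram bin ids

# Stand-in for stochastic approximation: sample a minibatch to cut ciphertext traffic.
batch_idx = rng.choice(n_samples, size=200, replace=False)

# Party A: encrypt the minibatch gradients and send the ciphertexts to Party B.
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)
enc_gradients = {int(i): public_key.encrypt(float(gradients[i])) for i in batch_idx}

# Party B: aggregate encrypted gradients per feature bin, without decrypting anything.
enc_bin_sums = {}
for i, ciphertext in enc_gradients.items():
    b = int(bins_b[i])
    if b in enc_bin_sums:
        enc_bin_sums[b] = enc_bin_sums[b] + ciphertext   # homomorphic addition
    else:
        enc_bin_sums[b] = ciphertext

# Party A: decrypt only the per-bin gradient sums and score candidate splits locally.
bin_sums = {b: private_key.decrypt(c) for b, c in enc_bin_sums.items()}
print(sorted(bin_sums.items()))
```

In this sketch only aggregated bin statistics are ever decrypted, so Party A never sees Party B's raw feature values and Party B never sees individual gradients; the minibatch size is the knob that trades statistical accuracy for reduced ciphertext traffic, which is the role the paper assigns to its stochastic approximation.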
Keywords
secure multiparty Gradient Boosting,federated machine learning systems,joint data analytics,multiparty computation model,semihomomorphic encryption,involved party,shared Gradient Boosting machines model,learning scenario,label data,tree-based GBM,SEAL homomorphic encryption schemes,communication payloads,training procedure,real-world data,inference procedures,training dataset increases,gradient boosting machines framework