Evaluating Differentially Private Generative Adversarial Networks Over Membership Inference Attack

IEEE Access (2021)

Abstract
As communication technology advances with 5G, the amount of data accumulated online is increasing explosively. From these data, valuable results are being created through data analysis technologies. Among them, artificial intelligence (AI) has shown remarkable performance in various fields and is emerging as an innovative technology. In particular, machine learning and deep learning models are evolving rapidly and are being widely deployed in practical applications. Meanwhile, behind the widespread use of these models, privacy concerns have been raised continuously. In addition, as practical privacy invasion attacks against machine learning and deep learning models have been proposed, the importance of research on privacy-preserving AI is being emphasized. Accordingly, in the field of differential privacy, which has become a de facto standard for preserving privacy, various mechanisms have been proposed to preserve the privacy of AI models. However, it remains unclear how to calibrate appropriate privacy parameters that account for the trade-off between a model's utility and data privacy. Moreover, there is a lack of research analyzing the relationship between the degree of differential privacy guarantee and privacy invasion attacks. In this paper, we investigate the resistance of differentially private AI models to practical privacy invasion attacks according to the degree of privacy guarantee, and analyze how privacy parameters should be set to prevent the attacks while preserving the utility of the models. Specifically, we focus on generative adversarial networks (GANs), which are among the most sophisticated AI models, and on the membership inference attack, which is the most fundamental privacy invasion attack. In the experimental evaluation, by quantifying the effectiveness of the attack against the degree of privacy guarantee, we show that differential privacy can simultaneously preserve data privacy and model utility with moderate privacy budgets.
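The abstract mentions two ingredients without showing them: a differentially private training step, whose noise scale is what the privacy budget calibrates, and a membership inference test against the trained model. The sketch below is a minimal, assumption-laden NumPy illustration of both, not the paper's implementation; the function names, the loss-threshold attack formulation, and all parameter values (clip_norm, noise_multiplier, threshold) are hypothetical choices made for illustration only.

```python
import numpy as np

def dp_noisy_gradient(per_sample_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """DP-SGD-style step (illustrative, not the paper's code): clip each
    per-sample gradient to bound sensitivity, then add Gaussian noise.

    per_sample_grads: array of shape (batch_size, num_params).
    Returns the averaged, noised gradient used to update model parameters.
    """
    rng = rng or np.random.default_rng(0)
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    # Clip each sample's gradient to at most clip_norm.
    clipped = per_sample_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    summed = clipped.sum(axis=0)
    # Noise scaled to the clipping bound; a larger noise_multiplier corresponds
    # to a smaller privacy budget epsilon (stronger privacy, lower utility).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / per_sample_grads.shape[0]

def loss_threshold_membership_inference(losses, threshold):
    """Toy membership inference rule (an assumption, not the paper's attack):
    predict 'member' for records whose loss falls below the threshold, since a
    model that memorizes its training data tends to give members lower loss.
    """
    return losses < threshold

# Toy usage: members (seen during training) have slightly lower loss on average.
rng = np.random.default_rng(42)
member_losses = rng.normal(0.4, 0.1, size=1000)
nonmember_losses = rng.normal(0.7, 0.1, size=1000)
preds_members = loss_threshold_membership_inference(member_losses, threshold=0.55)
preds_nonmembers = loss_threshold_membership_inference(nonmember_losses, threshold=0.55)
attack_accuracy = 0.5 * (preds_members.mean() + (1 - preds_nonmembers.mean()))
print(f"toy attack accuracy: {attack_accuracy:.2f}")
```

In this toy setup the attack accuracy drops toward 0.5 (random guessing) as the gap between member and non-member losses shrinks, which is the kind of effect stronger noise (a smaller privacy budget) is expected to produce.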
Keywords
Privacy, Generative adversarial networks, Data models, Differential privacy, Analytical models, Hidden Markov models, Training, Artificial intelligence, Deep learning, Privacy-preserving deep learning, Membership inference attack