Gradient Self-alignment in Private Deep Learning

MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2023 WORKSHOPS (2023)

Abstract
Differential Privacy (DP) has become the gold standard for preserving privacy in deep learning. Intuitively speaking, DP ensures that the output of a model is approximately invariant to the inclusion or exclusion of a single individual's data from the training set. There is, however, a trade-off between privacy and utility: DP models tend to perform worse than non-DP models trained on the same data. This is because the per-sample gradient clipping and noise addition required for DP guarantees obfuscate each individual data point's contribution. In this work, we propose a method to reduce this discrepancy by improving the alignment of each training sample's per-sample gradient with its non-DP gradient, measured by their cosine similarity. Optimizing the alignment in only a relevant subset of gradient dimensions further improves performance. We evaluate our method on CIFAR-10 and a pediatric pneumonia chest X-ray dataset.
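The abstract refers to the standard DP-SGD mechanism (per-sample gradient clipping plus calibrated Gaussian noise) and to cosine similarity as the alignment measure. Below is a minimal, self-contained sketch of that measurement on toy gradients; it is illustrative only, not the authors' implementation, and all names and values (`clip_norm`, `noise_multiplier`, the random gradients) are assumptions.

```python
# Sketch: DP-SGD-style clipping and noising of per-sample gradients,
# then the cosine similarity between the privatized gradient and its
# non-DP counterpart -- the alignment quantity the abstract targets.
import numpy as np

rng = np.random.default_rng(0)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy per-sample gradients: batch of 4 samples, 8 parameters each.
per_sample_grads = rng.normal(size=(4, 8))
non_dp_grad = per_sample_grads.mean(axis=0)  # non-private batch gradient

clip_norm = 1.0          # per-sample L2 clipping threshold C (assumed)
noise_multiplier = 1.1   # sigma set by the target (epsilon, delta) (assumed)

# DP-SGD step: clip each per-sample gradient to norm C, sum,
# add Gaussian noise with std sigma * C, and average over the batch.
clipped = np.stack([
    g * min(1.0, clip_norm / np.linalg.norm(g)) for g in per_sample_grads
])
noisy_sum = clipped.sum(axis=0) + rng.normal(
    scale=noise_multiplier * clip_norm, size=non_dp_grad.shape)
dp_grad = noisy_sum / len(per_sample_grads)

# Alignment of the DP gradient with the non-DP batch gradient; clipping
# and noise pull this below 1, which is the degradation the paper's
# alignment objective aims to reduce.
print("cosine(dp_grad, non_dp_grad) =", cosine_similarity(dp_grad, non_dp_grad))
```

In this sketch, clipping rescales each per-sample gradient uniformly (preserving its direction), so the misalignment of the final update comes from the interaction of clipping with aggregation and from the added noise.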
Keywords
Differential Privacy, Private learning, Gradient alignment