Certified private data release for sparse Lipschitz functions

Konstantin Donhauser, Johan Lokna, Amartya Sanyal, March Boedihardjo, Robert Hönig, Fanny Yang

arXiv (Cornell University), 2023

Abstract
As machine learning has become more relevant for everyday applications, a natural requirement is the protection of the privacy of the training data. When the relevant learning questions are unknown in advance, or hyper-parameter tuning plays a central role, one solution is to release a differentially private synthetic data set that leads to similar conclusions as the original training data. In this work, we introduce an algorithm that enjoys fast rates for the utility loss for sparse Lipschitz queries. Furthermore, we show how to obtain a certificate for the utility loss for a large class of algorithms.
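To make the setting concrete, the following minimal sketch (not the paper's algorithm) illustrates the two ingredients the abstract refers to: releasing a differentially private synthetic dataset and measuring the utility loss of a Lipschitz query on real versus synthetic data. The noisy-histogram mechanism and the helper names `dp_histogram_synthetic` and `utility_loss` are illustrative assumptions, chosen only to show what "utility loss for a query" means.

```python
# Hypothetical illustration (not the paper's method): release a differentially
# private synthetic dataset on [0, 1] via a Laplace-noised histogram, then
# measure the utility loss of a 1-Lipschitz query on real vs. synthetic data.
import numpy as np

rng = np.random.default_rng(0)

def dp_histogram_synthetic(data, bins=10, epsilon=1.0, n_synth=1000):
    """Noisy-histogram mechanism: add Laplace(1/epsilon) noise to each bin
    count, clip to non-negative, and resample synthetic points from the
    resulting distribution."""
    counts, edges = np.histogram(data, bins=bins, range=(0.0, 1.0))
    noisy = np.clip(counts + rng.laplace(scale=1.0 / epsilon, size=bins), 0.0, None)
    probs = noisy / noisy.sum()
    # Sample bin indices, then place each synthetic point uniformly in its bin.
    idx = rng.choice(bins, size=n_synth, p=probs)
    return rng.uniform(edges[idx], edges[idx + 1])

def utility_loss(query, real, synth):
    """Absolute gap between the query's empirical means on real and synthetic data."""
    return abs(query(real).mean() - query(synth).mean())

# Example 1-Lipschitz query on [0, 1].
lipschitz_query = lambda x: np.abs(x - 0.3)

real_data = rng.beta(2.0, 5.0, size=5000)   # stand-in for the training data
synth_data = dp_histogram_synthetic(real_data, epsilon=1.0)

print(f"utility loss: {utility_loss(lipschitz_query, real_data, synth_data):.4f}")
```

The paper's contribution concerns fast rates and certificates for this kind of utility loss over sparse Lipschitz query classes; the histogram mechanism above is only a stand-in to make the quantity being bounded concrete.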
Keywords
private data release, sparse Lipschitz functions