Does Differentially Private Synthetic Data Lead to Synthetic Discoveries?
CoRR (2024)
Abstract
Background: Synthetic data has been proposed as a solution for sharing
anonymized versions of sensitive biomedical datasets. Ideally, synthetic data
should preserve the structure and statistical properties of the original data,
while protecting the privacy of the individual subjects. Differential privacy
(DP) is currently considered the gold standard approach for balancing this
trade-off.
Objectives: The aim of this study is to evaluate the Mann-Whitney U test on
DP-synthetic biomedical data in terms of Type I and Type II errors, in order to
establish whether statistical hypothesis testing performed on privacy-preserving
synthetic data is likely to lead to a loss of the test's validity or to
decreased power.
Methods: We evaluate the Mann-Whitney U test on DP-synthetic data generated
from real-world data, including a prostate cancer dataset (n=500) and a
cardiovascular dataset (n=70 000), as well as on data drawn from two Gaussian
distributions. Five different DP-synthetic data generation methods are
evaluated: two basic DP histogram release methods and the MWEM,
Private-PGM, and DP GAN algorithms.
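The evaluation setup described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it estimates the Type I error rate of the Mann-Whitney U test on synthetic data drawn from Laplace-noised (DP) histograms of two samples from the same Gaussian distribution, so any rejection is a false positive. The bin grid, sample size, and noise scale are illustrative assumptions.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def dp_histogram_synth(data, bins, eps, rng):
    """Release a DP histogram via the Laplace mechanism and resample from it."""
    counts, edges = np.histogram(data, bins=bins)
    # Sensitivity of a histogram is 1, so Laplace noise with scale 1/eps
    # gives eps-DP for this release (illustrative; budget split not modeled).
    noisy = counts + rng.laplace(scale=1.0 / eps, size=counts.size)
    noisy = np.clip(noisy, 0.0, None)
    if noisy.sum() == 0:
        noisy = np.ones_like(noisy)  # fall back to a uniform histogram
    probs = noisy / noisy.sum()
    # Draw synthetic points uniformly within the sampled bins.
    idx = rng.choice(len(probs), size=len(data), p=probs)
    return rng.uniform(edges[idx], edges[idx + 1])

def type1_error_rate(n=100, eps=1.0, reps=200, alpha=0.05, seed=0):
    """Fraction of repetitions where the test rejects although H0 is true."""
    rng = np.random.default_rng(seed)
    bins = np.linspace(-4.0, 4.0, 11)
    rejections = 0
    for _ in range(reps):
        a = rng.normal(size=n)  # both groups from the same distribution
        b = rng.normal(size=n)
        synth_a = dp_histogram_synth(a, bins, eps, rng)
        synth_b = dp_histogram_synth(b, bins, eps, rng)
        _, p = mannwhitneyu(synth_a, synth_b, alternative="two-sided")
        rejections += p < alpha
    return rejections / reps
```

Comparing `type1_error_rate(eps=0.1)` against `type1_error_rate(eps=10.0)` illustrates the paper's point: at small ε the independent noise added to each group's histogram can shift the synthetic distributions apart, inflating rejections well beyond the nominal α.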
Conclusion: Most of the tested DP-synthetic data generation methods showed
inflated Type I error, especially at privacy budgets of ϵ ≤ 1.
This result calls for caution when releasing and analyzing DP-synthetic data:
low p-values may be obtained in statistical tests simply as a byproduct of the
noise added to protect privacy. A DP smoothed histogram-based synthetic data
generation method was shown to produce valid Type I error rates at all privacy
levels tested, but it required a large original dataset and a modest privacy
budget (ϵ ≥ 5) to achieve reasonable Type II error levels.