Auditing and Generating Synthetic Data with Controllable Trust Trade-offs
CoRR (2023)
Abstract
Real-world data often exhibits bias, imbalance, and privacy risks. Synthetic
datasets have emerged to address these issues. This paradigm relies on
generative AI models to generate unbiased, privacy-preserving data while
maintaining fidelity to the original data. However, assessing the
trustworthiness of synthetic datasets and models is a critical challenge. We
introduce a holistic auditing framework that comprehensively evaluates
synthetic datasets and AI models. It focuses on preventing bias and
discrimination, ensuring fidelity to the source data, and assessing utility,
robustness, and privacy preservation. We demonstrate the framework's
effectiveness by auditing various generative models across diverse use cases
like education, healthcare, banking, and human resources, spanning different
data modalities such as tabular, time-series, vision, and natural language.
This holistic assessment is essential for compliance with regulatory
safeguards. We introduce a trustworthiness index to rank synthetic datasets
based on their safeguards trade-offs. Furthermore, we present a
trustworthiness-driven model selection and cross-validation process during
training, exemplified with "TrustFormers" across various data types. This
approach allows for controllable trustworthiness trade-offs in synthetic data
creation. Our auditing framework fosters collaboration among stakeholders,
including data scientists, governance experts, internal reviewers, external
certifiers, and regulators. This transparent reporting should become a standard
practice to prevent bias, discrimination, and privacy violations, ensuring
compliance with policies and providing accountability, safety, and performance
guarantees.
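The trustworthiness index described above ranks synthetic datasets by their safeguard trade-offs. A minimal sketch of such an index, assuming it is a weighted aggregate of per-safeguard audit scores (the safeguard names, weights, and score ranges here are illustrative assumptions, not the paper's actual formulation):

```python
# Hypothetical sketch of a trustworthiness index: a weighted aggregate of
# per-safeguard audit scores. All names and weights are illustrative
# assumptions; the paper's actual index may differ.
from dataclasses import dataclass

@dataclass
class AuditScores:
    fidelity: float    # similarity to the source data, assumed in [0, 1]
    utility: float     # downstream task performance, assumed in [0, 1]
    fairness: float    # 1 - measured bias/discrimination, assumed in [0, 1]
    robustness: float  # stability under perturbation, assumed in [0, 1]
    privacy: float     # resistance to inference attacks, assumed in [0, 1]

# Default weights: equal emphasis on every safeguard. Adjusting the weights
# expresses a controllable trade-off, e.g. privacy-heavy vs. utility-heavy.
DEFAULT_WEIGHTS = {"fidelity": 0.2, "utility": 0.2, "fairness": 0.2,
                   "robustness": 0.2, "privacy": 0.2}

def trust_index(scores: AuditScores, weights=None) -> float:
    """Aggregate per-safeguard scores into a single index in [0, 1]."""
    w = weights or DEFAULT_WEIGHTS
    return sum(w[name] * getattr(scores, name) for name in w)

def rank_datasets(audits: dict, weights=None) -> list:
    """Rank candidate synthetic datasets by descending trust index."""
    return sorted(audits,
                  key=lambda name: trust_index(audits[name], weights),
                  reverse=True)
```

Changing the weight vector lets a governance team steer model selection toward the safeguards its use case prioritizes, e.g. weighting privacy heavily for healthcare data.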
Keywords
controllable trust