FairProof : Confidential and Certifiable Fairness for Neural Networks
CoRR (2024)
Abstract
Machine learning models are increasingly used in societal applications, yet
legal and privacy concerns often demand that they be kept confidential.
Consequently, consumers, who are often at the receiving end of model
predictions, increasingly distrust the fairness of these models. To this end,
we propose FairProof, a system that uses Zero-Knowledge Proofs (a
cryptographic primitive) to publicly verify the fairness of a model while
maintaining confidentiality. We also propose a fairness certification
algorithm for fully-connected neural networks that is tailored to ZKPs and is
used in this system. We implement FairProof in Gnark and demonstrate
empirically that our system is practically feasible.
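The property being certified here is individual fairness with respect to a sensitive attribute: a model is fair at a point if changing only the sensitive feature does not change the prediction. The sketch below illustrates that property for a small fully-connected ReLU network; it is an illustrative check only, not the paper's ZKP-friendly certification algorithm, and all function names (`forward`, `is_individually_fair`) are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    # Fully-connected ReLU network; the final layer outputs raw logits.
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(W @ x + b)
    return weights[-1] @ x + biases[-1]

def is_individually_fair(x, sensitive_idx, weights, biases):
    # Flip the binary sensitive feature and compare predicted classes.
    x_flipped = x.copy()
    x_flipped[sensitive_idx] = 1.0 - x_flipped[sensitive_idx]
    y_orig = np.argmax(forward(x, weights, biases))
    y_flip = np.argmax(forward(x_flipped, weights, biases))
    return y_orig == y_flip
```

A certification algorithm must establish this not just at sampled points but over a neighborhood, which is what makes designing one that composes with ZKP circuits nontrivial.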