Latent Diffusion Models for Attribute-Preserving Image Anonymization
CoRR (2024)
Abstract
Generative techniques for image anonymization have great potential to
generate datasets that protect the privacy of those depicted in the images,
while achieving high data fidelity and utility. Existing methods have focused
extensively on preserving facial attributes, but have failed to adopt a more
comprehensive perspective that incorporates the scene and background into the
anonymization process. This paper presents, to the best of our knowledge, the
first approach to image anonymization based on Latent Diffusion Models (LDMs).
Every element of a scene is maintained to convey the same meaning, yet
manipulated in a way that makes re-identification difficult. We propose two
LDMs for this purpose: CAMOUFLaGE-Base exploits a combination of pre-trained
ControlNets, and a new controlling mechanism designed to increase the distance
between the real and anonymized images. CAMOUFLaGE-Light is based on the
Adapter technique, coupled with an encoding designed to efficiently represent
the attributes of different persons in a scene. The former solution achieves
superior performance on most metrics and benchmarks, while the latter cuts the
inference time in half at the cost of fine-tuning a lightweight module. We show
through extensive experimental comparison that the proposed method is
competitive with the state-of-the-art concerning identity obfuscation whilst
better preserving the original content of the image and tackling unresolved
challenges that current solutions fail to address.