Universal Fingerprint Generation: Controllable Diffusion Model with Multimodal Conditions
arXiv (2024)
Abstract
The utilization of synthetic data for fingerprint recognition has garnered
increased attention due to its potential to alleviate privacy concerns
surrounding sensitive biometric data. However, current methods for generating
fingerprints have limitations in creating impressions of the same finger with
useful intra-class variations. To tackle this challenge, we present GenPrint, a
framework to produce fingerprint images of various types while maintaining
identity and offering humanly understandable control over different appearance
factors such as fingerprint class, acquisition type, sensor device, and quality
level. Unlike previous fingerprint generation approaches, GenPrint is not
confined to replicating style characteristics from the training dataset alone:
it enables the generation of novel styles from unseen devices without requiring
additional fine-tuning. To accomplish these objectives, we developed GenPrint
using latent diffusion models with multimodal conditions (text and image) for
consistent generation of style and identity. Our experiments leverage a variety
of publicly available datasets for training and evaluation. Results demonstrate
the benefits of GenPrint in terms of identity preservation, explainable
control, and universality of generated images. Importantly, the
GenPrint-generated images yield comparable or even superior accuracy to models
trained solely on real data, and further enhance performance when used to
augment the diversity of existing real fingerprint datasets.
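The abstract does not detail how GenPrint fuses its text and image conditions during sampling. A common mechanism for steering a latent diffusion model with multiple conditions is multi-condition classifier-free guidance, where the unconditional noise prediction is pushed toward each conditional prediction with its own guidance weight. The sketch below illustrates only that combination step; the function name, signature, and guidance weights are hypothetical, not taken from the paper.

```python
import numpy as np

def multi_cond_guidance(eps_uncond, eps_text, eps_image,
                        w_text=5.0, w_image=2.0):
    """Combine noise predictions from two condition modalities.

    Hypothetical sketch of multi-condition classifier-free guidance:
    start from the unconditional prediction and add a weighted push
    toward the text-conditioned and image-conditioned predictions.
    All inputs are arrays of the same shape (one denoising step's
    predicted noise in latent space).
    """
    return (eps_uncond
            + w_text * (eps_text - eps_uncond)
            + w_image * (eps_image - eps_uncond))

# Toy usage with random "noise predictions" in a small latent space.
rng = np.random.default_rng(0)
shape = (4, 8, 8)  # illustrative latent shape, not from the paper
eps_u = rng.standard_normal(shape)
eps_t = rng.standard_normal(shape)
eps_i = rng.standard_normal(shape)
eps = multi_cond_guidance(eps_u, eps_t, eps_i)
```

With both weights set to zero the output reduces to the unconditional prediction, so the same routine covers unconditional sampling as a special case; larger weights trade diversity for stronger adherence to each condition.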