Let Real Images be as a Judger, Spotting Fake Images Synthesized with Generative Models
CoRR (2024)
Abstract
In recent years, generative models have shown powerful
capabilities in synthesizing images that are realistic in both quality and
diversity (e.g., facial images and natural subjects). Unfortunately, the
artifact patterns in fake images synthesized by different generative models
are inconsistent, causing previous methods that rely on spotting subtle
differences between real and fake images to fail. In our preliminary
experiments, we find that the artifacts in fake images keep changing as
generative models evolve, while natural images exhibit stable statistical
properties. In this paper, we employ natural traces, which are shared only by
real images, as an additional predictive target in the detector. Specifically,
the natural traces are learned from wild real images, and we introduce
extended supervised contrastive learning to pull them closer to real images
and push them further away from fake ones. This motivates the detector to make
decisions based on the proximity of an image to the natural traces. To conduct
comprehensive experiments, we built a high-quality and diverse dataset
comprising 6 GAN-based and 6 diffusion-based generative models, on which we
evaluate generalization to unknown forgery techniques and robustness to
different transformations. Experimental results show that our
proposed method gives 96.1%.
Extensive experiments conducted on the widely used platform Midjourney
reveal that our proposed method achieves an accuracy exceeding 78.4%,
underscoring its practicality for real-world deployment. The source
code and part of the self-built dataset are available in the supplementary material.
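The extended supervised contrastive objective described above could be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the function name, the use of a single learned trace embedding, and the choice to treat that embedding as an extra "real" sample in a standard supervised contrastive loss are all assumptions.

```python
import torch
import torch.nn.functional as F

def extended_supcon_loss(feats, labels, trace, temperature=0.07):
    """Sketch of an 'extended' supervised contrastive loss.

    The learned natural-trace embedding is appended to the batch as an
    extra positive for real images (label 1) and a negative for fake
    images (label 0), so real embeddings are pulled toward the trace
    and fake embeddings are pushed away from it.

    feats:  (N, D) image embeddings
    labels: (N,)   1 = real, 0 = fake
    trace:  (D,)   hypothetical natural-trace embedding
    """
    feats = F.normalize(feats, dim=1)
    trace = F.normalize(trace, dim=0)
    # Append the trace as one extra "real" sample.
    all_feats = torch.cat([feats, trace.unsqueeze(0)], dim=0)
    all_labels = torch.cat([labels, labels.new_ones(1)], dim=0)
    # Temperature-scaled cosine similarities; mask out self-pairs.
    sim = all_feats @ all_feats.T / temperature
    n = sim.size(0)
    self_mask = torch.eye(n, dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float('-inf'))
    # Positives share a label; self-pairs are excluded.
    pos_mask = (all_labels.unsqueeze(0) == all_labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    # Average negative log-probability over each anchor's positives.
    per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(1)
    loss = per_anchor / pos_mask.sum(1).clamp(min=1)
    return loss.mean()
```

Under this framing, minimizing the loss increases the similarity between real images and the natural-trace embedding while decreasing it for fakes, so a detector can score an image by its proximity to the trace.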