Artwork Protection Against Neural Style Transfer Using Locally Adaptive Adversarial Color Attack
arXiv (2024)
Abstract
Neural style transfer (NST) generates new images by combining the style of
one image with the content of another. However, unauthorized NST can exploit
artwork, raising concerns about artists' rights and motivating the development
of proactive protection methods. We propose the Locally Adaptive Adversarial
Color Attack (LAACA), which empowers artists to protect their artwork from
unauthorized style transfer by processing it before public release. Drawing on
the characteristics of human visual perception and the role of different
frequency components, our method strategically introduces frequency-adaptive
perturbations into the image. These perturbations significantly degrade the
quality of NST generation while keeping the visible change to the original
image acceptable, discouraging potential infringers from using protected
artworks because style transfer applied to them produces poor results.
Additionally, existing metrics often overlook the importance of color fidelity
when evaluating color-mattered tasks such as NST generation, where color is
crucial to artistic works. To assess such tasks comprehensively, we propose
the Adversarial Color Distance Metric (ACDM), which quantifies the color
difference between images before and after manipulation. Experimental results
confirm that attacking NST with LAACA yields visually inferior style transfer,
and that ACDM efficiently measures color differences in color-mattered tasks.
By providing artists with a tool to safeguard their intellectual property, our
work helps address the socio-technical challenges posed by the misuse of NST
in the art community.
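
The abstract does not spell out the ACDM formulation, so the following is only a generic stand-in for the kind of quantity it measures: a mean per-pixel CIEDE2000 distance in CIELAB space, a standard perceptual color-difference measure. The function name and interface are hypothetical.

```python
from skimage.color import rgb2lab, deltaE_ciede2000

def color_distance(img_a, img_b):
    """Mean perceptual color difference between two RGB images,
    given as (H, W, 3) float arrays with values in [0, 1]."""
    lab_a = rgb2lab(img_a)  # CIELAB separates lightness from chroma
    lab_b = rgb2lab(img_b)
    # CIEDE2000 approximates human-perceived color difference per pixel.
    return float(deltaE_ciede2000(lab_a, lab_b).mean())
```

Used this way, a large distance between the NST output of a clean artwork and that of its protected counterpart would indicate that the attack has substantially shifted the colors of the generated image.
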