SASSL: Enhancing Self-Supervised Learning via Neural Style Transfer
CoRR (2023)
Abstract
Self-supervised learning relies heavily on data augmentation to extract
meaningful representations from unlabeled images. While existing
state-of-the-art augmentation pipelines incorporate a wide range of primitive
transformations, these often disregard natural image structure. Thus, augmented
samples can exhibit degraded semantic information and low stylistic diversity,
affecting downstream performance of self-supervised representations. To
overcome this, we propose SASSL: Style Augmentations for Self-Supervised
Learning, a novel augmentation technique based on Neural Style Transfer. The
method decouples semantic and stylistic attributes in images and applies
transformations exclusively to the style while preserving content, generating
diverse augmented samples that better retain their semantic properties.
Experimental results show our technique achieves a top-1 classification
performance improvement of more than 2% on ImageNet compared to the
well-established MoCo v2. We also measure transfer learning performance across
five diverse datasets, observing significant improvements of up to 3.75%. Our
experiments indicate that decoupling style from content information and
transferring style across datasets to diversify augmentations can significantly
improve downstream performance of self-supervised representations.
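The core idea of decoupling style from content can be illustrated with a simple pixel-space analogue: treat per-channel statistics (mean and standard deviation) as a proxy for "style" and the spatial structure as "content", then shift the content image's statistics toward those of a style image. This is a hypothetical minimal sketch, not the SASSL method itself, which operates on deep features via Neural Style Transfer; the function name, `alpha` blending parameter, and statistic choice are illustrative assumptions.

```python
import numpy as np

def style_stat_transfer(content, style, alpha=0.5, eps=1e-5):
    """Toy style augmentation: move the per-channel mean/std ('style') of
    `content` toward those of `style`, leaving spatial structure ('content')
    untouched. Inputs are H x W x C float arrays."""
    c_mean = content.mean(axis=(0, 1), keepdims=True)
    c_std = content.std(axis=(0, 1), keepdims=True)
    s_mean = style.mean(axis=(0, 1), keepdims=True)
    s_std = style.std(axis=(0, 1), keepdims=True)
    # Normalize out the content image's channel statistics...
    normalized = (content - c_mean) / (c_std + eps)
    # ...and re-impose the style image's statistics.
    stylized = normalized * s_std + s_mean
    # Blend to control augmentation strength (alpha=0 keeps the original).
    return (1 - alpha) * content + alpha * stylized

rng = np.random.default_rng(0)
content = rng.random((32, 32, 3)).astype(np.float32)
style = 0.25 + 0.5 * rng.random((32, 32, 3)).astype(np.float32)
augmented = style_stat_transfer(content, style, alpha=1.0)
```

With `alpha=1.0` the augmented image inherits the style image's channel statistics exactly, while its spatial layout remains that of the content image; intermediate `alpha` values yield weaker augmentations, mirroring the strength knob common to augmentation pipelines.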