How Does Contrastive Learning Organize Images?

IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024

Abstract
Contrastive learning, a dominant self-supervised technique, encourages similar representations for augmentations of the same input and dissimilar representations for different inputs. Although low contrastive loss often correlates with high classification accuracy, recent studies challenge this direct relationship, spotlighting the crucial role of inductive biases. We examine these biases from a clustering viewpoint, observing that contrastive learning creates locally dense clusters, in contrast to the globally dense clusters produced by supervised learning. To capture this discrepancy, we introduce the Relative Local Density (RLD) metric. While this cluster property can hinder linear classification accuracy, replacing the linear classifier with a Graph Convolutional Network (GCN) based classifier mitigates the issue, boosting accuracy while requiring fewer parameters. The code is available here.
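To make the GCN-classifier idea concrete, below is a minimal sketch of classifying frozen contrastive features with one graph-convolution layer over a k-nearest-neighbor similarity graph. The abstract does not specify the paper's architecture; the helper names (knn_adjacency, GCNClassifier), the choice of k, the feature dimension, the single-layer design, and the random stand-in features are all illustrative assumptions, not the authors' implementation.

```python
# Sketch: GCN-based classifier over contrastive features (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F

def knn_adjacency(features: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Build a normalized kNN similarity graph from L2-normalized features."""
    z = F.normalize(features, dim=1)
    sim = z @ z.t()                                   # cosine similarity matrix
    topk = sim.topk(k + 1, dim=1).indices[:, 1:]      # drop the self-match
    n = features.size(0)
    adj = torch.zeros(n, n, device=features.device)
    adj.scatter_(1, topk, 1.0)
    adj = ((adj + adj.t()) > 0).float()               # symmetrize the graph
    adj = adj + torch.eye(n, device=features.device)  # add self-loops
    d = adj.sum(1).rsqrt()
    return d[:, None] * adj * d[None, :]              # D^{-1/2} A D^{-1/2}

class GCNClassifier(nn.Module):
    """One graph convolution: average neighbor features via the adjacency,
    then project to class logits with a learnable weight matrix."""
    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        self.weight = nn.Linear(in_dim, num_classes, bias=False)

    def forward(self, features: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        return self.weight(adj @ features)

# Usage with random stand-ins for a frozen encoder's output features.
features = torch.randn(256, 128)     # 256 samples, 128-d embeddings (assumed)
adj = knn_adjacency(features, k=10)
logits = GCNClassifier(128, 10)(features, adj)
print(logits.shape)                  # torch.Size([256, 10])
```

The intuition matching the abstract: because contrastive clusters are only locally dense, a linear classifier must separate them globally in Euclidean space, whereas graph convolution lets labels propagate along local neighborhoods, and the classifier itself needs only a single weight matrix.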
Key words
Self-supervised Learning, Supervised Learning, Relative Density, Dense Clusters, Linear Classifier, Graph Convolutional Network, Contrastive Loss, Graph Convolution, Role Of Bias, Inductive Bias, K-nearest Neighbor, Modularity, Class Labels, Conditional Independence, Image Pairs, Output Feature, Similarity Matrix, Global Structure, Real-world Scenarios, Representation Of Space, Visual Similarity, Supervised Learning Methods, Node Features, Global Density, Similar Space, Graph Convolutional Network Model, Euclidean Space, Ground Truth Labels, Learnable Weight Matrix, View Of Distribution