Ranking Enhanced Fine-Grained Contrastive Learning for Recommendation

ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)(2024)

Abstract
Contrastive learning (CL) has been widely used to improve recommendation performance, since its self-supervised signals can effectively alleviate the data sparsity issue in recommender systems. Nevertheless, most existing CL-based recommendation models construct negative sample pairs following the common practice, which may include many false negatives (i.e., highly similar nodes). Some studies detect potential false negatives through specific rules and either remove these samples from the negative set or treat them all as positives. However, they lack a finer-grained consideration of the samples. To address this limitation, we propose a Ranking Enhanced Fine-grained Contrastive Learning method (REFCL). Specifically, we devise two sampling strategies to generate two sets of positive samples with different confidence levels. These well-ordered positive sample sets are then integrated into our novel contrastive loss, which allows for a more nuanced treatment of distinctions between samples and thereby enhances the discriminative quality of the learned user/item representations. Extensive experiments conducted on three real-world datasets demonstrate the effectiveness and generality of REFCL.
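The abstract describes a contrastive loss that distinguishes between high- and low-confidence positive sets rather than treating all positives uniformly. The paper's exact loss is not given here, so the following is only a minimal illustrative sketch of the general idea: an InfoNCE-style objective in which a second, lower-confidence positive set contributes with a reduced weight. The function name, the weighting scheme, and the `w_lo` parameter are all assumptions, not REFCL's actual formulation.

```python
import numpy as np

def ranked_contrastive_loss(anchor, pos_hi, pos_lo, neg, tau=0.2, w_lo=0.5):
    """Illustrative sketch only: an InfoNCE-style loss with two positive
    sets of different confidence. High-confidence positives (pos_hi) are
    pulled toward the anchor at full weight; low-confidence positives
    (pos_lo) at a reduced weight w_lo. Not the paper's actual REFCL loss."""
    def cos_sim(a, b):
        a = a / np.linalg.norm(a, axis=1, keepdims=True)
        b = b / np.linalg.norm(b, axis=1, keepdims=True)
        return (a @ b.T) / tau  # temperature-scaled cosine similarity

    s_hi = cos_sim(anchor, pos_hi)   # (B, P_hi)
    s_lo = cos_sim(anchor, pos_lo)   # (B, P_lo)
    s_ng = cos_sim(anchor, neg)      # (B, N)

    # Shared log-sum-exp denominator over all candidates (numerically stable)
    all_s = np.concatenate([s_hi, s_lo, s_ng], axis=1)
    m = all_s.max(axis=1, keepdims=True)
    denom = m + np.log(np.exp(all_s - m).sum(axis=1, keepdims=True))

    # Per-tier negative log-likelihood; low-confidence tier is down-weighted
    loss_hi = -(s_hi - denom).mean()
    loss_lo = -(s_lo - denom).mean()
    return loss_hi + w_lo * loss_lo
```

Down-weighting rather than discarding the uncertain positives is one plausible way to realize the "finer-grained consideration of samples" the abstract argues for, in contrast to prior work that removes suspected false negatives or promotes them all to full positives.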
Key words
Self-Supervised Learning, Recommender System, Contrastive Learning, Sampling Strategy