Label Distribution Learning on Imbalanced Data

2020 IEEE 5th International Conference on Cloud Computing and Big Data Analytics (ICCCBDA)(2020)

Abstract
Although multi-label learning can handle problems involving label ambiguity, it is not suitable for applications where the overall distribution of label importance matters. Imbalanced data also poses challenges to classification applications. In this paper, we propose a novel learning paradigm named Label Distribution Learning on Imbalanced Data (LDLID) for such applications, which is based on an improved Kullback-Leibler divergence and uses an adaptive step size for gradient updates. The experimental results show that LDLID outperforms four widely used algorithms on four imbalanced datasets.
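The abstract describes learning a label distribution by minimizing a Kullback-Leibler divergence with an adaptive step size. The paper's "improved" KL divergence and exact update rule are not given here, so the sketch below is only an illustration of the general idea: a linear model whose softmax output is fitted to ground-truth label distributions under the standard KL divergence, with a simple backtracking line search standing in for the adaptive step size. All function and variable names are hypothetical.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the label axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(d_true, d_pred, eps=1e-12):
    # Mean KL(d_true || d_pred) over samples; eps avoids log(0).
    return np.mean(np.sum(d_true * np.log((d_true + eps) / (d_pred + eps)), axis=-1))

def fit_label_distribution(X, D, lr=0.5, epochs=200):
    # X: (n_samples, n_features); D: (n_samples, n_labels) ground-truth distributions.
    # Model: predicted distribution = softmax(X @ theta).
    theta = np.zeros((X.shape[1], D.shape[1]))
    for _ in range(epochs):
        P = softmax(X @ theta)
        # Gradient of the KL objective w.r.t. theta for a softmax output.
        grad = X.T @ (P - D) / X.shape[0]
        # Adaptive step stand-in: halve the step until the loss decreases.
        step, base = lr, kl_divergence(D, P)
        while kl_divergence(D, softmax(X @ (theta - step * grad))) > base and step > 1e-8:
            step *= 0.5
        theta -= step * grad
    return theta
```

On well-specified synthetic data this drives the KL divergence toward zero; handling class imbalance, as LDLID does, would additionally require modifying the divergence itself, which this sketch does not attempt.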
Key words
Imbalanced data, Multi-label classification, Kullback-Leibler divergence, Label distribution