Improved Categorical Cross-Entropy Loss for Training Deep Neural Networks with Noisy Labels

PRCV (2021)

Abstract
Deep neural networks (DNNs) have achieved impressive success in a variety of classification tasks. However, the presence of noisy labels in the training dataset adversely affects the performance of DNNs. Recently, numerous noise-robust loss functions have been proposed to combat the noisy-label problem, but we find that these loss functions are either slow to learn the underlying data pattern or insufficiently robust against noisy labels. Here, we propose an improved categorical cross-entropy (ICCE) loss to address this challenge. Through an exponential term, the ICCE automatically adjusts its weighting scheme based on the predicted probability distribution of the DNN, which gives it both strong noise robustness and fast learning ability. A theoretical analysis of the ICCE is presented in the context of noisy labels. Experiments on several datasets indicate that the ICCE improves the performance of DNNs even under high noise levels.
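The abstract does not give an explicit formula for the ICCE, so the following is only a minimal sketch of the idea it describes: a per-sample cross-entropy term reweighted by an exponential function of the network's predicted probability for the labeled class. The function name `icce_loss`, the hyperparameter `alpha`, and the particular weighting form `exp(-alpha * (1 - p_y))` are all illustrative assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def icce_loss(logits, targets, alpha=1.0):
    """Illustrative exponentially weighted cross-entropy.

    Hypothetical form for illustration only; the exact ICCE
    definition is not given in the abstract. Each sample's CE
    term is scaled by exp(-alpha * (1 - p_y)), so confidently
    fitted samples keep nearly full weight while low-confidence
    (possibly mislabeled) samples contribute less.
    """
    probs = F.softmax(logits, dim=1)
    # Predicted probability assigned to each sample's (possibly noisy) label.
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    # Standard per-sample cross-entropy, clamped for numerical stability.
    ce = -torch.log(p_y.clamp_min(1e-12))
    # Exponential weighting term driven by the predicted probability.
    weights = torch.exp(-alpha * (1.0 - p_y))
    return (weights * ce).mean()
```

Under this assumed form, `alpha = 0` makes every weight equal to 1 and the loss reduces to standard categorical cross-entropy, while larger `alpha` shrinks the contribution of low-confidence samples, trading some learning speed for robustness to label noise.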
Key words
DNNs, Noisy labels, Noise robustness