Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels
Zhilu Zhang, Mert R. Sabuncu

Abstract
Deep neural networks (DNNs) have achieved tremendous success in a variety of applications across many disciplines. Yet, their superior performance comes at the high cost of requiring large-scale, correctly annotated datasets. Moreover, due to DNNs' rich capacity, errors in training labels can hamper performance. To combat this problem, mean absolute error (MAE) has recently been proposed as a noise-robust alternative to the commonly used categorical cross entropy (CCE) loss. However, as we show in this paper, MAE can perform poorly with DNNs and challenging datasets. Here, we present a theoretically grounded set of noise-robust loss functions that can be seen as a generalization of MAE and CCE. The proposed loss functions can be readily applied with any existing DNN architecture and algorithm, while yielding good performance in a wide range of noisy label scenarios. We report results from experiments conducted with the CIFAR-10, CIFAR-100, and FASHION-MNIST datasets and synthetically generated noisy labels.
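The abstract only names the loss family; for concreteness, the sketch below is a minimal PyTorch rendering of the L_q ("generalized cross entropy") loss the paper proposes, L_q(f(x), y) = (1 - f_y(x)^q) / q. As q → 0 this recovers CCE (since lim_{q→0} (1 - p^q)/q = -log p), while q = 1 yields a loss equivalent to MAE up to scale. The function name is ours, and q = 0.7 follows the default commonly reported for the paper's experiments; treat this as an illustrative sketch, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def generalized_cross_entropy(logits: torch.Tensor,
                              targets: torch.Tensor,
                              q: float = 0.7) -> torch.Tensor:
    """L_q loss: (1 - p_y^q) / q, interpolating between CCE and MAE.

    q -> 0 recovers categorical cross entropy (-log p_y);
    q = 1 gives (1 - p_y), which equals MAE up to a constant factor.
    q = 0.7 is an assumed default based on the paper's experiments.
    """
    probs = F.softmax(logits, dim=1)                       # (N, C) class probabilities
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # probability of the true class
    return ((1.0 - p_y.pow(q)) / q).mean()

# Usage on a toy batch: 4 examples, 10 classes.
logits = torch.randn(4, 10)
targets = torch.randint(0, 10, (4,))
loss = generalized_cross_entropy(logits, targets)
loss.backward()  # differentiable, so it drops into any training loop
```

Because the gradient of L_q weights examples by p_y^q, confidently fit (likely clean) examples dominate less than under CCE, which is the intuition behind its noise robustness.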
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| image-classification-on-clothing1m | GCE | Accuracy: 69.75% |
| learning-with-noisy-labels-on-cifar-100n | GCE | Accuracy (mean): 56.73% |
| learning-with-noisy-labels-on-cifar-10n | GCE | Accuracy (mean): 87.85% |
| learning-with-noisy-labels-on-cifar-10n-1 | GCE | Accuracy (mean): 87.61% |
| learning-with-noisy-labels-on-cifar-10n-2 | GCE | Accuracy (mean): 87.70% |
| learning-with-noisy-labels-on-cifar-10n-3 | GCE | Accuracy (mean): 87.58% |
| learning-with-noisy-labels-on-cifar-10n-worst | GCE | Accuracy (mean): 80.66% |