How does Disagreement Help Generalization against Label Corruption?
Xingrui Yu, Bo Han, Jiangchao Yao, Gang Niu, Ivor W. Tsang, Masashi Sugiyama

Abstract
Learning with noisy labels is one of the most actively studied problems in weakly supervised learning. Based on the memorization effect of deep neural networks, training on small-loss instances has become a promising approach for handling noisy labels. This idea underlies the state-of-the-art approach "Co-teaching", which cross-trains two deep neural networks using the small-loss trick. However, as the number of epochs increases, the two networks converge to a consensus, and Co-teaching reduces to the self-training MentorNet. To address this issue, we propose a robust learning paradigm called Co-teaching+, which bridges the "Update by Disagreement" strategy with the original Co-teaching. First, both networks feed forward and predict all data, but keep only the instances on which their predictions disagree. Then, among this disagreement data, each network selects its own small-loss instances, but back-propagates the small-loss instances selected by its peer network to update its own parameters. Empirical results on benchmark datasets demonstrate that Co-teaching+ produces substantially more robust trained models than many state-of-the-art methods.
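To make the cross-update concrete, below is a minimal sketch of one Co-teaching+ mini-batch step in PyTorch. The networks `net1`/`net2`, optimizers `opt1`/`opt2`, and the scalar `forget_rate` are assumed placeholders; the forget-rate schedule and architectures follow the paper only loosely, so this is an illustrative sketch rather than the authors' official implementation.

```python
# A minimal sketch of one Co-teaching+ mini-batch update.
# Assumptions: net1/net2 are PyTorch classifiers, opt1/opt2 their
# optimizers, and forget_rate the fraction of (presumed noisy)
# large-loss data to drop in this epoch. Not the official code.
import torch
import torch.nn.functional as F

def coteaching_plus_step(net1, net2, opt1, opt2, x, y, forget_rate):
    # Step 1: both networks predict the whole batch.
    logits1, logits2 = net1(x), net2(x)
    pred1, pred2 = logits1.argmax(dim=1), logits2.argmax(dim=1)

    # Step 2: keep only the instances on which the two networks disagree.
    disagree = pred1 != pred2
    if disagree.sum() == 0:
        return  # no disagreement data in this batch; skip the update
    x_d, y_d = x[disagree], y[disagree]
    logits1_d, logits2_d = logits1[disagree], logits2[disagree]

    # Step 3: each network ranks the disagreement data by its own loss
    # and selects the fraction (1 - forget_rate) with the smallest loss.
    loss1 = F.cross_entropy(logits1_d, y_d, reduction="none")
    loss2 = F.cross_entropy(logits2_d, y_d, reduction="none")
    num_keep = max(1, int((1.0 - forget_rate) * len(y_d)))
    idx1 = torch.argsort(loss1)[:num_keep]  # small-loss picks of net1
    idx2 = torch.argsort(loss2)[:num_keep]  # small-loss picks of net2

    # Step 4: cross-update -- each network back-propagates the
    # small-loss instances selected by its *peer*, not by itself.
    opt1.zero_grad()
    F.cross_entropy(net1(x_d[idx2]), y_d[idx2]).backward()
    opt1.step()

    opt2.zero_grad()
    F.cross_entropy(net2(x_d[idx1]), y_d[idx1]).backward()
    opt2.step()
```

The disagreement filter keeps the two networks' selections diverse, which is what prevents them from collapsing to a consensus as plain Co-teaching does.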
Benchmarks
| Benchmark | Method | Accuracy (mean, %) |
|---|---|---|
| learning-with-noisy-labels-on-cifar-100n | Co-teaching+ | 57.88 |
| learning-with-noisy-labels-on-cifar-10n | Co-teaching+ | 90.61 |
| learning-with-noisy-labels-on-cifar-10n-1 | Co-teaching+ | 89.70 |
| learning-with-noisy-labels-on-cifar-10n-2 | Co-teaching+ | 89.47 |
| learning-with-noisy-labels-on-cifar-10n-3 | Co-teaching+ | 89.54 |
| learning-with-noisy-labels-on-cifar-10n-worst | Co-teaching+ | 83.26 |