2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00971

Human Uncertainty Makes Classification More Robust

Abstract: The classification performance of deep neural networks has begun to asymptote at near-perfect levels. However, their ability to generalize outside the training set and their robustness to adversarial attacks have not. In this paper, we make progress on this problem by training with full label distributions that reflect human perceptual uncertainty. We first present a new benchmark dataset which we call CIFAR10H, containing a full distribution of human labels for each image of the CIFAR10 test set. We then show…
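The training change described in the abstract, replacing one-hot targets with the full distribution of human labels, amounts to minimizing cross-entropy against soft targets. A minimal sketch is shown below, assuming a PyTorch-style setup in which the data loader yields each image together with its empirical human label distribution; the model, loader, and tensor shapes are illustrative assumptions, not the authors' released code.

```python
# Sketch only: cross-entropy against a full (soft) human label distribution,
# as opposed to one-hot labels. Names and shapes are illustrative assumptions.
import torch
import torch.nn.functional as F

def soft_label_loss(logits, human_dist):
    """Cross-entropy between network predictions and a label distribution.

    logits:     (batch, num_classes) raw network outputs
    human_dist: (batch, num_classes) rows sum to 1, e.g. the empirical
                frequency of each class among human annotators per image
    """
    log_probs = F.log_softmax(logits, dim=1)
    return -(human_dist * log_probs).sum(dim=1).mean()

def train_epoch(model, loader, optimizer, device="cpu"):
    model.train()
    for images, human_dist in loader:  # loader yields (image, label distribution)
        images, human_dist = images.to(device), human_dist.to(device)
        optimizer.zero_grad()
        loss = soft_label_loss(model(images), human_dist)
        loss.backward()
        optimizer.step()
```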

Cited by 144 publications (208 citation statements) | References 41 publications
“…One natural extension here would be to investigate the feasibility of using a machine learning dataset with lower-level category labels; for example, a subset of the ImageNet database [58]. Beyond their interest for psychology, human behavioral data for these domains provides a rich yet largely unexploited training signal for computer vision systems, as we recently demonstrated by using such information to improve the generalization performance and robustness of natural image classifiers [65]. More broadly, these results highlight the potential of a new paradigm for psychological research that draws on the increasingly abundant datasets, machine learning tools, and behavioral data available online, rather than procuring them for individual experiments at heavy computational and experimental cost.…”
Section: Discussion
confidence: 99%
“…56,123,124 A prime motive for trying to integrate cognitively motivated category structure back into CNN learning paradigms is, then, that it may help provide a solution to this new frontier of problems. Peterson et al. [125] attempted to do so indirectly by training a range of CNNs to predict the human categorization judgments ("human" or "soft" labels) from the CIFAR-10H dataset. They found that these CNNs performed significantly better on a number of out-of-sample natural image sets than CNNs trained on CIFAR-10, while scoring the same on the hard-label validation set (Fig.…”
Section: Similarity
confidence: 99%
“…It appeared as though training with human labels endowed networks with more tolerance to noise and more graceful degradation: exactly the current aims of the computer vision community, and those properties originally sought by early artificial intelligence researchers. Finally, Peterson et al. [125] showed that CNNs trained with human labels invariably performed better than alternative strategies, which either incorporate random label noise or train on convex combinations of image-label pairs [126]. This shows that the structure contained in human labels is helpful for classification beyond the regularization effects of adding training noise.…”
Section: Similarity
confidence: 99%
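The "convex combinations of image-label pairs" baseline mentioned in the statement above refers to mixup-style training. A minimal sketch of that comparison strategy is given below; the alpha hyperparameter, one-hot label encoding, and surrounding training loop are illustrative assumptions rather than the cited implementation.

```python
# Sketch only: mix a batch with a shuffled copy of itself so that both the
# images and their one-hot labels are convex combinations of two examples.
import torch

def mixup_batch(images, onehot_labels, alpha=0.2):
    """Return convex combinations of (image, label) pairs within a batch."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(images.size(0))
    mixed_images = lam * images + (1.0 - lam) * images[perm]
    mixed_labels = lam * onehot_labels + (1.0 - lam) * onehot_labels[perm]
    return mixed_images, mixed_labels
```

Unlike the human soft labels sketched earlier, these mixed targets carry no perceptual information, which is the contrast the cited comparison draws.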
“…However, to the best of our knowledge, very little work has been reported on the comprehensive interpretation of deep learning methods, especially for COVID-19 diagnosis. In recent years, deep learning methods have emerged and been widely used across a wide range of machine learning tasks, such as image classification [18][19][20], image segmentation [21,22], and natural language processing [23,24]. Owing to their good performance, deep learning methods have been adopted by many researchers to address automatic COVID-19 analysis.…”
Section: Introduction
confidence: 99%