In this paper, we study the classification problem in which we have access to an easily obtainable surrogate for true labels, namely complementary labels, which specify classes that observations do not belong to. Let Y and Ȳ be the true and complementary labels, respectively. We first model the annotation of complementary labels via transition probabilities P(Ȳ = i|Y = j), i ≠ j ∈ {1, · · · , c}, where c is the number of classes. Previous methods implicitly assume that P(Ȳ = i|Y = j), ∀i ≠ j, are identical, which is not true in practice because humans are biased toward their own experience. For example, as shown in Figure 1, if an annotator is more familiar with monkeys than prairie dogs when providing complementary labels for meerkats, she is more likely to employ "monkey" as a complementary label. We therefore reason that the transition probabilities will be different. In this paper, we propose a framework that contributes three main innovations to learning with biased complementary labels: (1) It estimates transition probabilities with no bias. (2) It provides a general method to modify traditional loss functions and extends standard deep neural network classifiers to learn with biased complementary labels. (3) It theoretically ensures that the classifier learned with complementary labels converges to the optimal one learned with true labels. Comprehensive experiments on several benchmark datasets validate the superiority of our method to current state-of-the-art methods.
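The abstract does not spell out how the loss functions are modified. Purely as an illustrative sketch, and not the authors' released implementation, the code below shows one forward-correction-style way a transition matrix Q with Q[j, i] = P(Ȳ = i|Y = j) could convert a classifier's predicted true-label posteriors P(Y|x) into complementary-label posteriors P(Ȳ|x), which can then be scored with an ordinary cross-entropy loss on the observed complementary labels; the function name and NumPy formulation are assumptions made for illustration.

```python
import numpy as np

def complementary_ce_loss(logits, comp_labels, Q):
    """Cross-entropy on complementary labels via a transition matrix (illustrative sketch).

    logits:      (n, c) raw classifier scores for n examples and c classes
    comp_labels: (n,)   indices of the complementary labels (classes the examples do NOT belong to)
    Q:           (c, c) transition matrix with Q[j, i] = P(Ybar = i | Y = j)
    """
    # Softmax over classes: the model's estimate of P(Y | x).
    shifted = logits - logits.max(axis=1, keepdims=True)
    p_true = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)   # (n, c)

    # Map true-label posteriors to complementary-label posteriors:
    # P(Ybar = i | x) = sum_j P(Ybar = i | Y = j) * P(Y = j | x)
    p_comp = p_true @ Q                                                     # (n, c)

    # Negative log-likelihood of the observed complementary labels.
    n = logits.shape[0]
    return -np.mean(np.log(p_comp[np.arange(n), comp_labels] + 1e-12))
```

Under this kind of correction, minimizing the loss on complementary labels drives the model's P(Y|x) toward the posteriors that best explain the observed Ȳ, which is consistent with the abstract's claim that the classifier learned with complementary labels converges to the one learned with true labels.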