2021
DOI: 10.48550/arxiv.2110.08418
Preprint

Nuances in Margin Conditions Determine Gains in Active Learning

Abstract: We consider nonparametric classification with smooth regression functions, where it is well known that notions of margin in E[Y | X] determine fast or slow rates in both active and passive learning. Here we elucidate a striking distinction between the two settings. Namely, we show that some seemingly benign nuances in notions of margin (involving the uniqueness of the Bayes classifier, and which have no apparent effect on rates in passive learning) determine whether or not any active learner can outperform passive …
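For reference, margin conditions of this type constrain how much mass the marginal P_X places near the decision boundary. One common form is Tsybakov's noise condition; the block below is only an illustration of the general flavor, not the finer variants (e.g., uniqueness of the Bayes classifier) that the paper distinguishes:

% Illustrative Tsybakov-type margin condition on η(x) = E[Y | X = x]:
% for some constants C > 0 and β ≥ 0, and all t > 0,
\[
  P_X\!\left( \left| \eta(X) - \tfrac{1}{2} \right| \le t \right) \;\le\; C\, t^{\beta}.
\]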

Cited by 2 publications (6 citation statements)
References 7 publications

“…The parameter ζ₀ should be considered as an extremely small quantity, where the extreme case corresponds to ζ₀ = 0 (which still allows arbitrary probability for the region {x ∈ X : η(x) = 1/2}). The special case with ζ₀ = 0 was recently studied in (Kpotufe et al., 2021), where the authors conclude that no active learner can outperform its passive counterpart in the nonparametric regime. We show next that, in the parametric regime (with function approximation), active learning with proper abstention (Algorithm 1) overcomes these noise-seeking conditions.…”
Section: Abstention To Avoid Noise-seeking
confidence: 99%
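To make the quoted condition concrete, the following is a hedged sketch of one way a ζ₀-margin condition of this kind could be formalized; it is an assumed formalization for illustration only, and the exact statements in the citing paper and in Kpotufe et al. (2021) may differ:

% Assumed formalization (not the citing paper's exact definition):
% let M := { x ∈ X : η(x) = 1/2 } denote the region where the Bayes classifier
% is non-unique. A hard-margin condition with exceptions on M reads
\[
  \forall x \in \mathcal{X}:\qquad \eta(x) = \tfrac{1}{2}
  \quad\text{or}\quad \left| \eta(x) - \tfrac{1}{2} \right| \ge \zeta_0 .
\]
% Setting ζ₀ = 0 makes the condition vacuous, so P_X(M) may indeed be arbitrary.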
“…The unit ball in the Sobolev space is defined as W_1^{α,∞}(X) := {f : ‖f‖_{W^{α,∞}} ≤ 1}. Following the convention of nonparametric active learning (Castro and Nowak, 2008; Minsker, 2012; Locatelli et al., 2017, 2018; Shekhar et al., 2021; Kpotufe et al., 2021), we assume X = [0, 1]^d and η ∈ W_1^{α,∞}(X) (except in Section 4).…”
Section: Problem Setting
confidence: 99%
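For context, a standard definition of the ‖·‖_{W^{α,∞}} norm behind this unit ball, assuming integer smoothness α, is given below; the cited works may instead use a Hölder-type norm for non-integer α:

% Standard Sobolev sup-norm for integer α (assumption: non-integer smoothness is
% usually handled via a Hölder-type condition on the highest-order derivatives):
\[
  \|f\|_{W^{\alpha,\infty}}
    := \max_{|\beta| \le \alpha}\; \operatorname*{ess\,sup}_{x \in [0,1]^d} \left| D^{\beta} f(x) \right|,
  \qquad
  W^{\alpha,\infty}_1(\mathcal{X}) := \left\{ f : \|f\|_{W^{\alpha,\infty}} \le 1 \right\}.
\]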
“…These works mainly focus on the parametric regime (e.g., learning with a set of linear classifiers), and their label complexities rely on the boundedness of the so-called disagreement coefficient (Hanneke, 2007, 2014; Friedman, 2009). Active learning in the nonparametric regime has been analyzed in Castro and Nowak (2008); Minsker (2012); Locatelli et al. (2017, 2018); Kpotufe et al. (2021). These algorithms rely on partitioning the input space X ⊆ [0, 1]^d into exponentially (in dimension) many small cubes, and then conducting local mean (or some higher-order statistic) estimation within each small cube.…”
Section: Related Work
confidence: 99%
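As a rough illustration of that partition-then-locally-estimate scheme, here is a minimal Python sketch; the function name and interface are hypothetical, and the actual algorithms in the cited works add adaptive refinement, confidence bounds, and label-efficient stopping rules:

import numpy as np

def partition_and_estimate(query_label, d, h, n_per_cell, rng=None):
    # Hypothetical sketch: split [0, 1]^d into axis-aligned cubes of side h
    # ((1/h)^d cells, i.e. exponentially many in d), query labels at points
    # drawn inside each cube, and return the local mean estimate of
    # eta(x) = E[Y | X = x] per cube.
    rng = np.random.default_rng() if rng is None else rng
    cells_per_axis = int(round(1.0 / h))
    estimates = {}
    for idx in np.ndindex(*([cells_per_axis] * d)):
        lower = np.array(idx) * h                      # lower corner of this cube
        xs = lower + h * rng.random((n_per_cell, d))   # query points inside the cube
        ys = np.array([query_label(x) for x in xs])    # active label queries
        estimates[idx] = ys.mean()                     # local estimate of eta on this cube
    return estimates

# Toy usage: eta(x) = x_1, so labels are Bernoulli(x_1).
rng = np.random.default_rng(0)
est = partition_and_estimate(lambda x: rng.binomial(1, x[0]), d=2, h=0.25, n_per_cell=20, rng=rng)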