2018
DOI: 10.3150/16-bej893

Optimal adaptive inference in random design binary regression

Abstract: We construct confidence sets for the regression function in nonparametric binary regression with an unknown design density, a nuisance parameter in the problem. These confidence sets are adaptive in L_2 loss over a continuous class of Sobolev-type spaces. Adaptation holds in the smoothness of the regression function, over the maximal parameter spaces where adaptation is possible, provided the design density is smooth enough. We identify two key regimes: one where adaptation is possible, and one where some criti…
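The abstract concerns random-design binary regression, where covariates are drawn from an unknown design density and responses are Bernoulli given the covariate. The sketch below illustrates only this data-generating model and an L_2-type discrepancy, not the paper's confidence-set construction; the particular regression function f and design density used here are illustrative assumptions.

```python
# Minimal sketch of the random-design binary regression model: X_i drawn from
# an unknown design density, Y_i ~ Bernoulli(f(X_i)). The choices of f and the
# Beta(2, 2) design density are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical design density on [0, 1].
X = rng.beta(2.0, 2.0, size=n)

# Hypothetical smooth regression function, bounded away from 0 and 1.
def f(x):
    return 0.1 + 0.8 * (0.5 + 0.4 * np.sin(2 * np.pi * x))

Y = rng.binomial(1, f(X))

# Crude binned estimate of f and its empirical L2-type discrepancy from the
# truth -- the kind of loss over which the paper's confidence sets are stated.
bins = np.linspace(0.0, 1.0, 21)
idx = np.digitize(X, bins) - 1
f_hat = np.array([Y[idx == b].mean() if np.any(idx == b) else 0.5 for b in range(20)])
centers = (bins[:-1] + bins[1:]) / 2
l2_discrepancy = np.sqrt(np.mean((f_hat - f(centers)) ** 2))
print(f"empirical L2 discrepancy of binned estimate: {l2_discrepancy:.3f}")
```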

Cited by 11 publications (7 citation statements)
References 56 publications
“…However, because IF_{22,k} − IF_{22,k_j} is not an exact second-order degenerate U-statistic, using such exponential tail bounds for hypothesis testing requires a more careful analysis to obtain constants that can be estimated from data. Results along this line can be found in Mukherjee et al. (2016) and Mukherjee and Sen (2018), but more careful analysis is needed to obtain explicit constants. (3) It is possible to refine the strategy for multiple testing adjustment considered above, in which we distribute the desired overall type-I error uniformly over the J − j actually tested hypotheses associated with the hypothesis of actual interest H_{0,2,k_j}(δ) (4.1).…”
Section: Answer To Question (3) (mentioning)
confidence: 83%
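The excerpt above describes splitting the overall type-I error uniformly across the J − j hypotheses that are actually tested, i.e., a Bonferroni-style adjustment. The sketch below shows only that uniform split under placeholder p-values; the symbols J, j, and the overall level alpha come from the excerpt, while everything else is a hypothetical illustration rather than the cited authors' procedure.

```python
# Minimal sketch of a Bonferroni-style uniform split of the overall type-I
# error across J - j tested hypotheses. The p-values are placeholders.
import numpy as np

rng = np.random.default_rng(1)
J, j = 12, 4          # so J - j = 8 hypotheses are actually tested (hypothetical values)
alpha = 0.05          # desired overall type-I error

p_values = rng.uniform(size=J - j)   # placeholder p-values, one per tested hypothesis
per_test_level = alpha / (J - j)     # uniform share of the error budget per test

rejected = p_values < per_test_level
print(f"per-test level: {per_test_level:.4f}")
print(f"rejected hypotheses (indices): {np.flatnonzero(rejected).tolist()}")
```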
“…slightly abusing our notation. The minimax properties and adaptive estimators for binary regression in the classical non-distributed setting are studied, for instance, in Mukherjee and Sen (2018). Binary regression is an example of a model where the log-likelihood ratios W_d^{(k)} are bounded.…”
Section: Binary Regression (mentioning)
confidence: 99%
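The excerpt notes that binary regression has bounded log-likelihood ratios. The sketch below is a numeric illustration of why: when the success probabilities stay in an interval bounded away from 0 and 1, the Bernoulli log-likelihood ratio between two candidate regression functions cannot exceed a fixed constant. The functions f0, f1 and the interval [0.1, 0.9] are illustrative assumptions, not taken from the cited papers.

```python
# Numeric check that the Bernoulli log-likelihood ratio is bounded when the
# candidate regression functions take values in [0.1, 0.9].
import numpy as np

rng = np.random.default_rng(2)

def f0(x):
    return np.clip(0.3 + 0.4 * x, 0.1, 0.9)   # hypothetical "true" regression function

def f1(x):
    return np.clip(0.6 - 0.3 * x, 0.1, 0.9)   # hypothetical alternative

def log_likelihood_ratio(y, x):
    """log p_{f1}(y | x) - log p_{f0}(y | x) for a Bernoulli observation."""
    p0, p1 = f0(x), f1(x)
    return y * np.log(p1 / p0) + (1 - y) * np.log((1 - p1) / (1 - p0))

x = rng.uniform(size=10_000)
y = rng.binomial(1, f0(x))
llr = log_likelihood_ratio(y, x)

# With probabilities confined to [0.1, 0.9], |LLR| can never exceed log(0.9/0.1).
print(f"max |LLR| observed: {np.abs(llr).max():.3f}  (bound: {np.log(9):.3f})")
```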
“…Later on, self-similarity was also used by Giné and Nickl (2010) to construct confidence bounds over finite intervals. Aside from these two works, the self-similarity condition has also been used in other applications, including high-dimensional sparse signal estimation (Nickl and van de Geer 2013), binary regression (Mukherjee and Sen 2018), and L_p-confidence sets (Nickl and Szabó 2016), to mention but a few.…”
Section: Related Literature (mentioning)
confidence: 99%