2022
DOI: 10.1002/sta4.482

Minimax optimal high‐dimensional classification using deep neural networks

Abstract: High‐dimensional classification is a fundamentally important research problem in high‐dimensional data analysis. In this paper, we derive a nonasymptotic rate for the minimax excess misclassification risk when the feature dimension diverges exponentially with the sample size and the Bayes classifier possesses a complicated modular structure. We also show that classifiers based on deep neural networks can attain the above rate and are hence minimax optimal.
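To make the abstract's plug-in idea concrete, the following is a minimal illustrative sketch (not the paper's construction): a small feed-forward ReLU network is fit to estimate the conditional class probability on toy data, and its threshold at 1/2 serves as the classifier. The data, network sizes, and training schedule here are all assumptions for demonstration only; the paper's modular DNN architecture attaining the minimax rate is far more delicate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: labels determined by a simple linear decision boundary.
n, d = 400, 5
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # 0/1 labels

# One hidden layer of ReLU units with a logistic output.
h = 16
W1 = rng.normal(scale=0.5, size=(d, h))
b1 = np.zeros(h)
w2 = rng.normal(scale=0.5, size=h)
b2 = 0.0

def forward(X):
    Z = np.maximum(X @ W1 + b1, 0.0)           # ReLU hidden layer
    p = 1.0 / (1.0 + np.exp(-(Z @ w2 + b2)))   # estimated P(Y=1 | X)
    return Z, p

# Full-batch gradient descent on the mean logistic (cross-entropy) loss.
lr = 0.1
for _ in range(500):
    Z, p = forward(X)
    g = (p - y) / n                   # gradient of the loss at the output logit
    W1_grad = X.T @ (np.outer(g, w2) * (Z > 0))   # backprop through ReLU
    b1 -= lr * (np.outer(g, w2) * (Z > 0)).sum(axis=0)
    w2 -= lr * (Z.T @ g)
    b2 -= lr * g.sum()
    W1 -= lr * W1_grad

# Plug-in classifier: predict class 1 when the estimated probability exceeds 1/2.
_, p = forward(X)
train_err = np.mean((p > 0.5) != y)
print(f"training misclassification rate: {train_err:.3f}")
```

The excess misclassification risk studied in the paper is the gap between such a classifier's risk and the Bayes risk; the sketch above only shows the plug-in mechanism, not the rate analysis.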

Cited by 1 publication (1 citation statement)
References 25 publications
“…It is interesting to explore whether minimax optimal DNN approaches with dimension-free properties can be established in the classification setting. Existing literature on this front is limited, and most works either treat classification as regression by estimating the conditional class probability instead of the decision boundary (Kohler and Langer, 2020; Bos and Schmidt-Hieber, 2021; Hu et al., 2021; Wang et al., 2022a,b; Wang and Shang, 2022) or settle for an upper bound on the misclassification risk (Kim et al., 2021; Steinwart et al., 2007; Hamm and Steinwart, 2020). Unlike regression problems, where one intends to estimate the unknown regression function, the goal of classification is to recover the unknown decision boundary separating different classes.…”
Section: Introduction (citation type: mentioning)
confidence: 99%