2019
DOI: 10.48550/arxiv.1903.10063
Preprint
Optimal Linear Discriminators For The Discrete Choice Model In Growing Dimensions

Cited by 1 publication (5 citation statements) | References 0 publications
“…This assumption can be thought of as an analogue of the restricted eigenvalue assumption frequently used in the analysis of the high-dimensional linear model (especially the LASSO; see e.g. [6]) to obtain the estimation error from the prediction error. It was also used in earlier work by the authors [24], where it was shown that the condition is satisfied by several classes of distributions (e.g. under elliptical symmetry or log-concavity of densities).…”
Section: When D/n →
confidence: 99%
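For context on the quoted passage: the restricted eigenvalue condition it refers to is a standard assumption in high-dimensional regression. A common textbook formulation (not taken from this paper; the sparsity level s, cone constant c_0, and constant κ are generic notation) is:

```latex
% Restricted eigenvalue (RE) condition, standard form:
% X \in \mathbb{R}^{n \times d} is the design matrix; for every support set S
% of size at most s, and every v in the cone \|v_{S^c}\|_1 \le c_0 \|v_S\|_1,
\min_{\substack{S \subseteq \{1,\dots,d\},\ |S| \le s}}\;
\min_{\substack{v \ne 0,\ \|v_{S^c}\|_1 \le c_0 \|v_S\|_1}}
\frac{\|Xv\|_2}{\sqrt{n}\,\|v_S\|_2} \;\ge\; \kappa \;>\; 0.
```

Under such a condition, a bound on the prediction error $\|X(\hat\theta - \theta)\|_2/\sqrt{n}$ yields a bound on the estimation error $\|\hat\theta - \theta\|_2$, which is the role the quoted passage describes the analogous assumption playing in the discrete choice setting.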
“…One can relax this assumption by additionally showing that the estimators are consistent. To prove Theorem 3.10 we use Theorem A.3 of [24]. To match our notation with that theorem, here our loss function γ(θ, •) is:…”
Section: Asymptotic Distribution
confidence: 99%