2022
DOI: 10.5705/ss.202019.0315
Elastic-net Regularized High-dimensional Negative Binomial Regression: Consistency and Weak Signal Detection

Abstract: We study sparse negative binomial regression (NBR) for count data by showing non-asymptotic merits of the Elastic-net estimator. Two types of oracle inequalities are derived for the Elastic-net estimates of NBR by utilizing the Compatibility Factor or the Stabil Condition. The second type of oracle inequality holds for random designs and can be extended to many $\ell_1 + \ell_2$ regularized M-estimation problems whose corresponding empirical processes have stochastic Lipschitz properties. To show some high probability events, we derive conc…
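As a concrete illustration of the estimator the abstract describes, here is a minimal sketch, not the authors' implementation: it fits an elastic-net-penalized negative binomial GLM via statsmodels, with the dispersion parameter fixed at 1.0 by assumption and the tuning values alpha and L1_wt chosen arbitrarily for the example.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, p, s = 200, 50, 5                         # sample size, dimension, sparsity
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:s] = 0.5                          # a few moderate signals
mu = np.exp(X @ beta_true)                   # NB mean via the log link

# Draw NB counts with mean mu: numpy parameterizes by
# (number of successes r, success probability r / (r + mu)).
r = 1.0
y = rng.negative_binomial(r, r / (r + mu))

# Elastic-net penalized NB likelihood: alpha scales the total penalty,
# L1_wt in [0, 1] mixes the l1 (sparsity) and l2 (ridge) parts.
model = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1.0))
fit = model.fit_regularized(method="elastic_net", alpha=0.05, L1_wt=0.7)
print("selected coordinates:", np.flatnonzero(np.abs(fit.params) > 1e-8))

The l2 part of the penalty stabilizes the estimate under correlated designs, which is the usual reason to prefer the elastic net over the pure lasso in this setting.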

Cited by 19 publications (19 citation statements) | References 40 publications
“…where the last inequality follows from $\|\hat{\beta}-\beta^{*}\|_{1}\le 4e^{5LB}ks\lambda$. For general losses beyond linear models, the crucial techniques in the non-asymptotic analysis of increasing-dimensional and high-dimensional regressions are the Bahadur representation of the M-estimator [49,64] and concentration inequalities for Lipschitz loss functions [18,98], respectively. In large-dimensional regressions with $p/n \to c$, random matrix theory [93], leave-one-out analysis [27,53] and approximate message passing [23,26,27] play important roles in obtaining asymptotic results.…”
Section: High-dimensional Poisson Regressions With Random Design (mentioning)
confidence: 99%
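The quoted passage leans on the Bahadur representation as the key device for M-estimators. A textbook-style schematic form (standard material, not a display from either paper) for an M-estimator $\hat{\theta}$ solving $\sum_{i}\psi(X_{i};\theta)=0$ is

$$
\hat{\theta}-\theta_{0}
= -\,H_{0}^{-1}\,\frac{1}{n}\sum_{i=1}^{n}\psi(X_{i};\theta_{0}) + R_{n},
\qquad
H_{0} = \mathbb{E}\,\partial_{\theta}\psi(X;\theta_{0}),
$$

where the non-asymptotic work consists of bounding the remainder $R_{n}$ uniformly; the Lipschitz-loss concentration mentioned next plays the analogous role in the high-dimensional regime.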
“…More adaptively, we prefer to use the weighted restriction, where the weights are data-dependent and will be specified later. From [23], we add an Elastic-net penalty with tuning parameter c, which accounts for the measurement errors (see [24, 25]) for a similar purpose. We would have … in the situation without measurement errors.…”
Section: Density Estimation (mentioning)
confidence: 99%
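The weighted elastic-net penalty referred to above has a simple closed-form proximal operator, which is how such estimators are typically computed. The following is a generic sketch under assumed notation (per-coordinate weights w, l1 level lam1, ridge level lam2), not code from either paper.

import numpy as np

def prox_weighted_elastic_net(v, w, lam1, lam2):
    """Prox of b -> lam1 * sum_j w_j*|b_j| + lam2 * ||b||_2^2 at the point v.
    Coordinate-wise: soft-threshold by lam1*w_j, then shrink by 1/(1+2*lam2)."""
    soft = np.sign(v) * np.maximum(np.abs(v) - lam1 * w, 0.0)
    return soft / (1.0 + 2.0 * lam2)

# Example: heavier weights push small coordinates exactly to zero.
v = np.array([1.5, -0.2, 0.8])
print(prox_weighted_elastic_net(v, w=np.array([1.0, 2.0, 1.0]), lam1=0.3, lam2=0.5))

Coordinates with larger weights are thresholded harder, which is the data-dependent adaptivity the quote alludes to.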
“…; (H.3): …. (H.2) is an assumption in sparse estimation, and the assumption (H.3) is a classical compact parameter space assumption in sparse high-dimensional regressions (see [9, 25]).…”
Section: Sparse Mixture Density Estimation (mentioning)
confidence: 99%
“…To achieve model selection consistency or the variable screening property in a high-dimensional problem, one common condition is the "beta-min" condition, which requires that the nonzero regression coefficients be sufficiently large (Zhao and Yu, 2006; Huang and Xie, 2007; Van de Geer et al., 2011; Tibshirani, 2011; Zhang and Jia, 2017). Therefore, classical methods for variable selection often focus on strong signals that satisfy such a condition.…”
Section: Introduction (mentioning)
confidence: 99%
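For context, the "beta-min" condition in the quote is a lower bound on the smallest nonzero coefficient. One common schematic form (generic constant c, sparsity s, dimension p, sample size n; the exact rate varies across the cited papers) is

$$
\min_{j \in S} \bigl|\beta_{j}^{*}\bigr| \;\ge\; c\sqrt{\frac{s \log p}{n}},
\qquad S = \{\, j : \beta_{j}^{*} \neq 0 \,\},
$$

under which exact support recovery becomes feasible. Signals below this threshold are the "weak" ones, which is the regime the surveyed paper's weak signal detection results target.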