Overlapping group lasso for high-dimensional generalized linear models
2019. DOI: 10.1080/03610926.2018.1500604

Cited by 4 publications (4 citation statements). References 22 publications.
“…Lemma 1 extends the support recovery outcome of the Thresholded Lasso Bandit, as stated in [4], to the scenario of multiple agents exchanging information with each other. The reliance on $s_0$ instead of $d$ is similar to that of the offline result (Theorem 3.1 of [32]) and the bandit setting illustrated in Lemma 5.4 of [4]. Our thresholding approach, combined with allowing agents to share their estimated sets, facilitates a more precise dimension reduction through the learning process, effectively removing the reliance on $d$ for the estimation error once $t$ exceeds $2\log(2d^2)/C_0^2$.…”
Section: Decentralized Peer-to-peer Framework (mentioning, confidence: 71%)
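The two-step scheme quoted above (a Lasso fit in the full dimension $d$, then thresholding so that subsequent estimation depends on the sparsity $s_0$ rather than $d$) can be illustrated offline. A minimal sketch on synthetic data, where the threshold `c0`, the regularization level `alpha`, and all variable names are illustrative assumptions rather than the cited algorithm:

```python
# Thresholded-Lasso support recovery: Lasso in dimension d, then keep only
# coefficients above a threshold. Synthetic data; c0 and alpha are assumptions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d, s0 = 200, 500, 5                 # samples, ambient dimension, sparsity
beta = np.zeros(d)
beta[:s0] = 1.0                        # true support: first s0 coordinates
X = rng.standard_normal((n, d))
y = X @ beta + 0.1 * rng.standard_normal(n)

lasso = Lasso(alpha=0.05).fit(X, y)    # step 1: Lasso estimate in dimension d
c0 = 0.5                               # step 2: discard small coefficients
support = np.flatnonzero(np.abs(lasso.coef_) >= c0)
print(support)                         # ideally recovers {0, ..., s0 - 1}
```

After thresholding, any refit restricted to `support` involves on the order of $s_0$ coordinates, which is the dimension-reduction effect the citing authors exploit.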
“…Recently, studies on sparse linear bandits [24, 4] overcome that limitation, requiring no prior information about the sparsity parameter of the model. Moreover, thresholding has become a natural and efficient approach to feature selection in online and offline learning [4, 32, 25], achieving excellent performance in sparse linear bandits. Thus, establishing a dependable interval for the threshold value is necessary and crucial for the algorithm to operate effectively.…”
Section: Related Work (mentioning, confidence: 99%)
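To make the online use of such a threshold concrete, here is a hedged sketch of a single round of a thresholded-Lasso linear bandit: estimate the support from past observations, refit on the reduced coordinates, and pick an arm greedily. The function name, the arm representation, and the values of `alpha` and `c0` are illustrative assumptions, not the algorithm of [4]:

```python
# One round of a thresholded-Lasso linear bandit (illustrative sketch).
import numpy as np
from sklearn.linear_model import Lasso

def bandit_round(X_hist, y_hist, arms, alpha=0.05, c0=0.5):
    """X_hist: (t, d) past contexts; y_hist: (t,) rewards; arms: (K, d)."""
    coef = Lasso(alpha=alpha).fit(X_hist, y_hist).coef_
    S = np.flatnonzero(np.abs(coef) >= c0)         # estimated support
    if S.size == 0:                                # fallback: full dimension
        S = np.arange(arms.shape[1])
    # refit ordinary least squares on the reduced coordinates only
    theta_S, *_ = np.linalg.lstsq(X_hist[:, S], y_hist, rcond=None)
    return int(np.argmax(arms[:, S] @ theta_S))    # greedy arm index
```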
“…Hence, the expectile index τ will be fixed throughout this section, such that $E[g_\tau(\varepsilon_i)] = 0$. Assumption (A2) is commonly considered in high-dimensional models when the number of parameters diverges with n (Wang and Wang (2014), Zhao et al (2018), Wang and Tian (2019), Ciuperca (2021), Hu et al (2021), Zhou et al (2019)).…”
Section: Adaptive Group Lasso Expectile Estimator (mentioning, confidence: 99%)
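For context, $E[g_\tau(\varepsilon_i)] = 0$ identifies the $\tau$-th expectile of the error as zero. Under the usual asymmetric-squared-loss definition of expectiles (the exact normalization in the cited paper may differ), one standard formulation is:

```latex
\rho_\tau(u) = \bigl|\tau - \mathbf{1}\{u \le 0\}\bigr|\, u^2,
\qquad
g_\tau(u) = \bigl|\tau - \mathbf{1}\{u \le 0\}\bigr|\, u,
```

so that $g_\tau$ is proportional to the derivative of the loss $\rho_\tau$, and $E[g_\tau(\varepsilon_i)] = 0$ holds exactly when the $\tau$-th expectile of $\varepsilon_i$ is zero.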
“…Based on the PCA and PLS $n^{1/2}$-consistent estimators corresponding to adaptive weights, Mendez-Civieta et al (2021) study the sparsity of the adaptive group LASSO quantile estimator. On the other hand, Wang and Wang (2014) study adaptive LASSO estimators for the generalized linear model (GLM), while Wang and Tian (2019) consider grouped variables for a GLM, with results that include the case $p > n$. Zhou et al (2019) prove oracle inequalities for the estimation and prediction error of the overlapping group Lasso method in GLMs. Concerning the computational aspects, when the LS loss function is penalized with the $L_1$-norm for subgroups of coefficients and with the $L_2$-norm or $L_1$-norm for fused coefficient subgroups, Dondelinger and Mukherjee (2020) present two coordinate descent algorithms for calculating the corresponding estimators.…”
Section: Introduction (mentioning, confidence: 99%)
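Since the underlying paper concerns group-Lasso penalties in GLMs, a minimal sketch of the core computation may be useful: a logistic GLM with a group-Lasso penalty, fitted by proximal gradient descent with group-wise soft-thresholding. Groups are taken non-overlapping for brevity (the overlapping case studied by Zhou et al (2019) is commonly handled by duplicating covariates across groups); the function names, step size, and penalty weight are illustrative assumptions:

```python
# Group-Lasso-penalized logistic regression via proximal gradient descent.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def group_soft_threshold(v, t):
    """Shrink the whole group v toward zero by t in Euclidean norm."""
    norm = np.linalg.norm(v)
    return np.zeros_like(v) if norm <= t else (1.0 - t / norm) * v

def group_lasso_logistic(X, y, groups, lam=0.1, step=0.1, n_iter=500):
    """groups: list of index arrays partitioning the columns of X."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (sigmoid(X @ beta) - y) / len(y)  # logistic-loss gradient
        beta = beta - step * grad                      # gradient step
        for g in groups:                               # proximal (shrinkage) step
            beta[g] = group_soft_threshold(beta[g], step * lam * np.sqrt(len(g)))
    return beta
```

The group-wise Euclidean shrinkage zeroes out entire groups of coefficients at once, which is the defining behaviour of group-Lasso estimators.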