2019 IEEE International Conference on Data Mining (ICDM)
DOI: 10.1109/icdm.2019.00107
Efficient Approximate Solution Path Algorithm for Order Weight L_1-Norm with Accuracy Guarantee

Cited by 6 publications (4 citation statements)
References 18 publications
“…Our goal in this paper is to propose an efficient extra proximal gradient algorithm that employs Nesterov's acceleration technique and the extra gradient scheme, and to unroll this algorithm into a deep neural network, called the extra proximal gradient network (EPGN), to solve a class of inverse problems (1). Motivated by the least absolute shrinkage and selection operator (LASSO) [11, 12, 13], our EPGN implicitly adopts an ℓ1-type regularization in (1) with a nonlinear sparsification mapping learned from data. The proximal operator of this regularization is realized by several linear convolutions, nonlinear activation functions, and shrinkage operations for robust sparse feature selection in EPGN.…”
Section: Introduction
confidence: 99%
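The statement above describes an accelerated proximal gradient scheme whose key ingredient is a shrinkage (soft-thresholding) proximal step. As a rough illustration of that mechanism, and not of EPGN itself, the NumPy sketch below runs a standard accelerated proximal gradient (FISTA-style) loop for an ℓ1-regularized least-squares problem; the data term, step size, and function names are assumptions made for this example.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (element-wise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def accelerated_prox_grad(A, b, lam, step, n_iter=200):
    """FISTA-style loop for 0.5*||Ax - b||^2 + lam*||x||_1.
    Illustrative only: EPGN replaces this hand-crafted prox with
    convolutions, activations, and shrinkage learned from data."""
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)                              # gradient of the smooth term at y
        x_new = soft_threshold(y - step * grad, step * lam)   # proximal (shrinkage) step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0      # Nesterov momentum parameter
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)         # extrapolation step
        x, t = x_new, t_new
    return x
```

With a step size no larger than the reciprocal of the largest eigenvalue of A^T A, this loop enjoys the accelerated O(1/k^2) convergence rate on the smooth part of the objective.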
“…However, manually tuning hyperparameters is often time-consuming and depends heavily on human prior knowledge, which makes it difficult to find the optimal hyperparameters. Therefore, choosing hyperparameters automatically has attracted great attention, and a large number of hyperparameter optimization (HO) methods have emerged, such as grid search, random search (Bergstra et al. 2011; Bergstra and Bengio 2012), solution path (Gu and Sheng 2017; Gu, Liu, and Huang 2017; Bao, Gu, and Huang 2019; Gu and Ling 2015), and several Bayesian methods (Thornton et al. 2013; Brochu, Cora, and De Freitas 2010; Swersky, Snoek, and Adams 2014; Wu et al. 2019).…”
Section: Introduction
confidence: 99%
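Among the hyperparameter-selection strategies listed in the statement above, solution-path methods (the family the cited ICDM paper belongs to) sweep the regularization parameter over a grid and reuse each solution to warm-start the next solve. The sketch below shows only that generic warm-starting pattern; the solver fit(A, b, lam, x_init) is a hypothetical placeholder, not an API from the paper.

```python
import numpy as np

def warm_started_path(A, b, lambdas, fit):
    """Sweep a decreasing grid of regularization values, warm-starting each
    solve from the previous solution. `fit(A, b, lam, x_init) -> x` is any
    solver for the regularized problem (hypothetical signature)."""
    x = np.zeros(A.shape[1])
    path = []
    for lam in sorted(lambdas, reverse=True):   # start at large lam (sparsest solution)
        x = fit(A, b, lam, x)                   # warm start from the previous solution
        path.append((lam, x.copy()))
    return path
```

Warm starting is what makes sweeping many regularization values cheap: consecutive solutions on the grid are close, so each solve needs only a few iterations to converge.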
“…OWL regression (Bogdan et al., 2013; Zeng & Figueiredo, 2014; Bogdan et al., 2015; Figueiredo & Nowak, 2016; Bao et al., 2019) has recently emerged as a useful procedure for high-dimensional sparse regression, as it promotes sparsity and grouping simultaneously. Unlike group Lasso (Yuan & Lin, 2006) and its variants, OWL regression can automatically identify precise grouping structures of strongly correlated covariates during the learning process, without any prior information about feature groups.…”
Section: Introduction
confidence: 99%
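The OWL (ordered weighted ℓ1) norm referenced above is what gives the regression its simultaneous sparsity and grouping behavior, and its proximal operator can be evaluated by sorting, weight subtraction, and an isotonic projection, as described by Bogdan et al. (2015). The NumPy sketch below follows that standard recipe; it is a minimal illustration with function names of my own choosing, not code from the cited papers.

```python
import numpy as np

def _isotonic_nonincreasing(z):
    """Pool-adjacent-violators projection onto nonincreasing sequences."""
    vals, sizes = [], []
    for zi in z:
        vals.append(float(zi)); sizes.append(1)
        while len(vals) > 1 and vals[-2] < vals[-1]:     # merge blocks that violate monotonicity
            total = vals[-1] * sizes[-1] + vals[-2] * sizes[-2]
            size = sizes[-1] + sizes[-2]
            vals.pop(); sizes.pop()
            vals[-1], sizes[-1] = total / size, size
    out = np.empty(len(z))
    i = 0
    for v, s in zip(vals, sizes):
        out[i:i + s] = v
        i += s
    return out

def prox_owl(v, w):
    """Prox of the OWL norm sum_i w_i * |v|_(i), with w nonincreasing, nonnegative:
    sort |v| descending, subtract w, project onto the monotone cone, clip, unsort."""
    v = np.asarray(v, dtype=float)
    order = np.argsort(-np.abs(v))                       # indices sorting |v| descending
    u = np.abs(v)[order] - np.asarray(w, dtype=float)    # subtract sorted weights
    u = np.maximum(_isotonic_nonincreasing(u), 0.0)      # isotonic projection, clip at zero
    x = np.empty_like(v)
    x[order] = u                                         # undo the sort
    return np.sign(v) * x                                # restore signs
```

With linearly decaying weights (the OSCAR choice), this prox tends to map strongly correlated covariates to exactly equal coefficient magnitudes, which is the automatic grouping behavior the statement refers to.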