2019
DOI: 10.1109/access.2019.2915970

Online ADMM-Based Extreme Learning Machine for Sparse Supervised Learning

Abstract: Sparse learning is an efficient technique for feature selection and for avoiding overfitting in machine learning. Considering sparse learning for real-world problems with online learning demands in neural networks, an online sparse supervised learning algorithm for the extreme learning machine (ELM) is proposed based on the alternating direction method of multipliers (ADMM), termed OAL1-ELM. In OAL1-ELM, an ℓ1-regularization penalty is added to the loss function to generate a sparse solution and enhance the gen…
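The core computation the abstract describes is an ℓ1-regularized least-squares solve for the ELM output weights via ADMM. The sketch below is a minimal batch illustration of that idea, not the paper's online update rule; the sigmoid hidden layer, the parameter names (alpha, rho, n_iter), and the random data are illustrative assumptions.

```python
import numpy as np

def elm_hidden(X, W, b):
    # Random-feature hidden layer of an ELM: sigmoid(X W + b).
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def soft_threshold(v, kappa):
    # Proximal operator of kappa * ||.||_1 (element-wise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_l1_elm(H, T, alpha=0.1, rho=1.0, n_iter=100):
    """Solve min_beta 0.5*||H beta - T||^2 + alpha*||beta||_1 with ADMM.

    beta-update: ridge-like linear solve; z-update: soft-thresholding;
    u: scaled dual variable.
    """
    L, m = H.shape[1], T.shape[1]
    beta = np.zeros((L, m)); z = np.zeros((L, m)); u = np.zeros((L, m))
    A = H.T @ H + rho * np.eye(L)   # reused by every beta-update
    HtT = H.T @ T
    for _ in range(n_iter):
        beta = np.linalg.solve(A, HtT + rho * (z - u))
        z = soft_threshold(beta + u, alpha / rho)
        u = u + beta - z
    return z  # z is the soft-thresholded, exactly sparse copy of the weights

# Illustrative usage on random data (shapes only; not the paper's experiments).
rng = np.random.default_rng(0)
X, T = rng.standard_normal((200, 10)), rng.standard_normal((200, 3))
W, b = rng.standard_normal((10, 50)), rng.standard_normal(50)
beta = admm_l1_elm(elm_hidden(X, W, b), T, alpha=0.5)
print("nonzero rows:", np.count_nonzero(np.abs(beta).sum(axis=1) > 1e-8))
```

The z iterate is returned rather than beta because in ADMM it is the soft-thresholded copy that is exactly sparse.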

Cited by 9 publications (6 citation statements)
References: 48 publications
“…Finally, the transferred features were used to train an adaptive ELM for classification. Song and Li [165] proposed an improved ELM for sparse feature learning named OAL1-ELM. They added ℓ1 regularization to the loss function of the ELM, which was solved by the alternating direction method of multipliers.…”
Section: Representation/Feature Learning
confidence: 99%
“…Therefore, ReOS-ELM and OS-RELM are basically ℓ2-norm modifications of the original OS-ELM algorithm. Another approach is the ADMM-based online ELM algorithm (OAL1-ELM) [11], which applies ℓ1-norm minimization within the ADMM framework. Different from these works, the GPU-MRO-ELM algorithm: (1) applies two regularizations at the same time in an online setting, combining the benefits of sparsity and stability; (2) compared with the OAL1-ELM algorithm, MRO-ELM and GPU-MRO-ELM produce joint sparsity with mixed-norm regularization, that is, all the elements in a row of the output weight matrix are eliminated rather than individual elements, and the resulting neural network is more compact since the corresponding neurons are removed entirely; and (3) GPU acceleration is combined with optional automatic parallel hyper-parameter tuning that accelerates both the training time and the tuning time of MRO-ELM.…”
Section: Related Work
confidence: 99%
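The joint sparsity mentioned in the excerpt comes from a mixed-norm (ℓ2,1) penalty whose proximal step shrinks whole rows of the output weight matrix instead of individual entries. The snippet below contrasts the two proximal operators; it is a generic illustration under that assumption, not code from the MRO-ELM paper.

```python
import numpy as np

def prox_l1(B, kappa):
    # Element-wise shrinkage: individual weights can be zeroed (ℓ1).
    return np.sign(B) * np.maximum(np.abs(B) - kappa, 0.0)

def prox_l21(B, kappa):
    # Row-wise shrinkage: an entire row of the output weight matrix is
    # zeroed at once, so the corresponding hidden neuron can be pruned.
    norms = np.linalg.norm(B, axis=1, keepdims=True)
    scale = np.maximum(1.0 - kappa / np.maximum(norms, 1e-12), 0.0)
    return B * scale

B = np.random.default_rng(1).standard_normal((6, 3))
print("ℓ1 zero entries :", np.sum(prox_l1(B, 0.8) == 0))            # scattered zeros
print("ℓ2,1 zero rows  :", np.sum(~prox_l21(B, 1.5).any(axis=1)))   # whole rows removed
```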
“…where α > 0 is a regularization factor. Since the ℓ1 penalty term is not differentiable, the sparse output weights β can be solved by iterative algorithms [38], [49]. This is known as the lasso model [34] and has been researched by many scholars.…”
Section: B. Regularized ELM
confidence: 99%
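The objective the excerpt refers to is the standard lasso form of the regularized ELM output-weight problem. The reconstruction below is an assumption about the omitted equation (H denoting the hidden-layer output matrix and T the targets), included to show why a non-smooth iterative step is needed:

$$
\min_{\beta}\ \tfrac{1}{2}\,\lVert H\beta - T\rVert_2^2 + \alpha\,\lVert \beta\rVert_1,\qquad \alpha > 0 .
$$

Because $\lVert\beta\rVert_1$ is not differentiable at zero, there is no closed-form normal-equation solution; ADMM and other proximal methods instead apply the element-wise soft-thresholding operator

$$
\operatorname{prox}_{\kappa\lVert\cdot\rVert_1}(v) = \operatorname{sign}(v)\,\max(\lvert v\rvert - \kappa,\ 0),
$$

which drives many components of $\beta$ exactly to zero.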
“…We often add an ℓ1 penalty (the lasso method [34]), an ℓ2 penalty (ridge regression [35]), or a mixture of the two (the elastic net [36]) to overcome these problems. Therefore, the regularized extreme learning machine (RELM) [37] can obtain sparse [38]-[40] or stable [41] solutions by applying an ℓ1 or ℓ2 penalty, and it can prune the structure of the neural network while remaining stable by using the elastic net penalty [42]-[44].…”
Section: Introduction
confidence: 99%
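For reference, the three penalties listed in the excerpt, written for the ELM output weights β (a generic reconstruction; the symbols H, T, and λ are assumptions, not notation taken from the cited papers):

$$
\min_{\beta}\ \lVert H\beta - T\rVert_2^2 +
\begin{cases}
\lambda\,\lVert\beta\rVert_1 & \text{lasso (sparse)}\\[2pt]
\lambda\,\lVert\beta\rVert_2^2 & \text{ridge (stable)}\\[2pt]
\lambda_1\,\lVert\beta\rVert_1 + \lambda_2\,\lVert\beta\rVert_2^2 & \text{elastic net (sparse and stable)}
\end{cases}
$$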