2021
DOI: 10.1007/s10994-021-06063-x

Quick and robust feature selection: the strength of energy-efficient sparse training for autoencoders

Abstract: Major complications arise from the recent increase in the amount of high-dimensional data, including high computational costs and memory requirements. Feature selection, which identifies the most relevant and informative attributes of a dataset, has been introduced as a solution to this problem. Most of the existing feature selection methods are computationally inefficient; inefficient algorithms lead to high energy consumption, which is not desirable for devices with limited computational and energy resources…
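As a rough illustration of the idea in the title, selecting input features with a sparsely trained autoencoder, the following is a minimal NumPy sketch. It is an assumption-laden simplification, not the authors' algorithm: it uses a fixed random sparse topology on the encoder and a plain reconstruction loss (rather than energy-efficient sparse training with topology updates), and all names, sizes, and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 500 samples, 20 features, only the first 5 are informative.
n, d, k = 500, 20, 5
X = rng.normal(size=(n, d))
X[:, k:] = 0.05 * rng.normal(size=(n, d - k))   # near-constant noise features

h = 10                                          # hidden units
density = 0.3                                   # fraction of encoder weights kept
mask1 = rng.random((d, h)) < density            # fixed sparse connectivity (illustrative)
W1 = rng.normal(scale=0.1, size=(d, h)) * mask1
W2 = rng.normal(scale=0.1, size=(h, d))         # decoder kept dense for brevity
lr = 0.01

for _ in range(200):                            # plain reconstruction training
    H = np.tanh(X @ W1)
    X_hat = H @ W2
    err = X_hat - X                             # gradient of squared loss w.r.t. X_hat
    grad_W2 = H.T @ err / n
    grad_H = (err @ W2.T) * (1 - H ** 2)
    grad_W1 = (X.T @ grad_H / n) * mask1        # pruned weights stay exactly zero
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

# Rank input features by the total strength of their surviving connections.
strength = np.abs(W1).sum(axis=1)
selected = np.argsort(strength)[::-1][:k]
print("selected features:", sorted(selected.tolist()))
```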

Cited by 9 publications (6 citation statements)
References 30 publications
“…Feature selection aims to find the most relevant features of the input and is studied extensively in the machine-learning literature. Approaches can be grouped by whether labeled data is used, i.e. supervised (Nie et al., 2010) or unsupervised (Ball & Hall, 1965; Hilborn & Lainiotis, 1967; He et al., 2005; Balın et al., 2019; Atashgahi et al., 2020), or by the high-level approach taken: filter methods (Blum & Langley, 1997), wrapper methods (Kohavi & John, 1997), or embedded methods (Yuan & Lin, 2006). Most relevant to our work are embedded supervised methods, as they have good scaling properties while achieving the best performance, which is vital in our setting with over a million features.…”
Section: Related Work
confidence: 99%
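The quoted passage distinguishes filter, wrapper, and embedded feature selection. The sketch below contrasts the first and last categories on synthetic data; the scoring function, penalty value, and data are illustrative assumptions and are not tied to any of the cited methods.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 10
X = rng.normal(size=(n, d))
y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=n)    # only features 0 and 1 matter

# Filter method: score each feature on its own (absolute correlation with y).
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(d)])
print("filter ranking:", np.argsort(corr)[::-1][:2])

# Embedded method: sparsity falls out of the model itself (L1-penalised regression,
# solved with proximal gradient descent / soft-thresholding).
w, lr, lam = np.zeros(d), 0.01, 0.1
for _ in range(500):
    grad = X.T @ (X @ w - y) / n
    w = w - lr * grad
    w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
print("embedded (nonzero weights):", np.flatnonzero(w))
```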
“…In an AE, the data are reconstructed by learning the best features and matching the output to the input as closely as possible [45]. Stacked AEs are one of the AE types designed for automated feature selection [46]. In an SAE, the output of the first AE is the input of the second one.…”
Section: Automated Feature Selection With Stacked Autoencoder
confidence: 99%
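To make "the output of the first AE is the input of the second one" concrete, here is a minimal NumPy sketch of a two-level stacked autoencoder; the sizes, activation, and training loop are illustrative assumptions rather than the cited SAE implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def train_autoencoder(X, n_hidden, lr=0.01, epochs=300):
    """Train a single tanh autoencoder with a squared reconstruction loss."""
    n, d = X.shape
    W_enc = rng.normal(scale=0.1, size=(d, n_hidden))
    W_dec = rng.normal(scale=0.1, size=(n_hidden, d))
    for _ in range(epochs):
        H = np.tanh(X @ W_enc)                         # encoder
        err = H @ W_dec - X                            # reconstruction error
        grad_dec = H.T @ err / n
        grad_enc = X.T @ ((err @ W_dec.T) * (1 - H ** 2)) / n
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc
    return W_enc

X = rng.normal(size=(256, 32))

# Stacked autoencoder: each AE is trained on the code produced by the previous one.
W1 = train_autoencoder(X, n_hidden=16)
H1 = np.tanh(X @ W1)                                   # output of the first AE ...
W2 = train_autoencoder(H1, n_hidden=8)                 # ... is the input of the second one
H2 = np.tanh(H1 @ W2)
print("code shapes:", H1.shape, H2.shape)
```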
“…In Mostafa and Wang (2019), the authors proposed automatically reallocating parameters across layers during sparse training of CNNs. Many works have further studied the sparse-training concept recently (Atashgahi et al., 2022; Gordon et al., 2018; Liu et al., 2020, 2021a).…”
Section: Sparse-to-sparse
confidence: 99%
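The prune-and-regrow topology update at the core of sparse training, of which cross-layer parameter reallocation is a variant, can be sketched as follows. The drop fraction, initialisation, and random regrowth rule are illustrative (SET-style) assumptions rather than any specific cited scheme.

```python
import numpy as np

rng = np.random.default_rng(3)

def prune_and_regrow(W, mask, zeta=0.3):
    """One sparse-training topology update: drop the weakest active weights,
    then regrow the same number of connections at random inactive positions.
    (Dynamic reallocation schemes instead decide where to regrow, e.g. across
    layers, based on learned statistics.)"""
    active = np.flatnonzero(mask)
    n_drop = int(zeta * active.size)
    # Prune: remove the n_drop active connections with the smallest magnitude.
    weakest = active[np.argsort(np.abs(W.ravel()[active]))[:n_drop]]
    mask.ravel()[weakest] = False
    W.ravel()[weakest] = 0.0
    # Regrow: activate n_drop currently inactive connections, freshly initialised.
    inactive = np.flatnonzero(~mask)
    reborn = rng.choice(inactive, size=n_drop, replace=False)
    mask.ravel()[reborn] = True
    W.ravel()[reborn] = rng.normal(scale=0.01, size=n_drop)
    return W, mask

d_in, d_out, density = 64, 32, 0.1
mask = rng.random((d_in, d_out)) < density
W = rng.normal(scale=0.1, size=(d_in, d_out)) * mask
W, mask = prune_and_regrow(W, mask)
print("active connections:", mask.sum())               # density stays constant
```

Keeping the number of active connections constant is what allows such methods to stay sparse from the start of training to the end, instead of pruning a dense network afterwards.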
“…Sparse neural networks have been considered an effective solution to address these challenges (Hoefler et al., 2021; Mocanu et al., 2021). By using sparsely connected layers instead of fully-connected ones, sparse neural networks have reached performance competitive with their dense equivalents in various applications (Frankle & Carbin, 2018; Atashgahi et al., 2022), while having far fewer parameters. It has been shown that biological brains, especially the human brain, enjoy sparse connections among neurons (Friston, 2008).…”
Section: Introduction
confidence: 99%
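A minimal sketch of the "sparsely connected layer instead of a fully-connected one" idea mentioned above: a fixed binary mask zeroes most weights, so only a small fraction of the parameters is ever used. The layer sizes and density are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(4)

d_in, d_out, density = 784, 256, 0.05
mask = rng.random((d_in, d_out)) < density              # fixed sparse connectivity
W_sparse = rng.normal(scale=0.05, size=(d_in, d_out)) * mask
b = np.zeros(d_out)

def sparse_layer(x):
    """Forward pass of a sparsely connected layer: pruned weights stay exactly zero."""
    return np.maximum(x @ W_sparse + b, 0.0)            # ReLU

x = rng.normal(size=(8, d_in))
print("output shape:", sparse_layer(x).shape)
print("parameters used: %d of %d (%.0f%%)" %
      (mask.sum(), mask.size, 100 * mask.sum() / mask.size))
```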