Wavelet extreme learning machine and deep learning for data classification (2022)
DOI: 10.1016/j.neucom.2020.04.158

Cited by 37 publications (10 citation statements) · References 44 publications
“…However, due to the massive and fast data streams emerging in practical applications, it is difficult to obtain all the data at once. At the same time, when new data arrive, these batch-processing algorithms retrain on the new data and discard the old models, resulting in a large loss of effective historical data [21]. Therefore, learning models that can handle data stream environments are receiving more and more attention.…”
Section: Related Work (mentioning)
confidence: 99%
“…These models generally consist of deep artificial neural networks, convolutional neural networks and related architectures. Open-source image processing models built on these networks include GoogLeNet, VGG16, VGG19, ResNet and Inception [32,33]. Another difficulty seen in the studies in the literature is related to the image data.…”
Section: Literature Review (mentioning)
confidence: 99%
“…Recently, the ELM has attracted considerable attention from researchers because of its high generalization performance and remarkably fast learning rate compared with traditional methods. The minimal requirement for human intervention is another advantage of the ELM approach, where most parameters can be randomly generated (Yahia et al., 2021). In particular, the ELM can adaptively determine the number of nodes in the hidden layer, randomly assign the input weights and hidden layer biases using an activation function, and obtain output layer weights through the least squares method; these abilities appreciably enhance the learning speed and generalization ability (Ding et al., 2015).…”
Section: Extreme Learning Machine (mentioning)
confidence: 99%
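The ELM training procedure described in the excerpt above (randomly generated input weights and hidden biases, hidden outputs passed through an activation function, output weights solved by least squares) can be illustrated in a few lines. The sketch below is a minimal example under those assumptions, using NumPy, a sigmoid activation, and a pseudoinverse solve; the function names and toy data are hypothetical and not taken from the cited papers.

```python
# Minimal ELM sketch: random hidden layer, least-squares output weights.
# Illustrative only; names and data are hypothetical, not from the paper.
import numpy as np

def elm_fit(X, T, n_hidden=50, seed=0):
    """Train a single-hidden-layer ELM.
    X: (n_samples, n_features) inputs; T: (n_samples, n_outputs) targets."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden layer output (sigmoid)
    beta = np.linalg.pinv(H) @ T                     # output weights via least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy usage: a two-class problem with one-hot targets.
X = np.random.default_rng(1).standard_normal((200, 5))
T = np.eye(2)[(X[:, 0] > 0).astype(int)]
W, b, beta = elm_fit(X, T)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
```

Because only the output weights are fitted (in closed form), training reduces to one matrix pseudoinverse, which is what gives the ELM the fast learning speed noted in the excerpt.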