2019
DOI: 10.1007/s00500-019-04499-x
An ensemble based on neural networks with random weights for online data stream regression

Abstract: Most information sources in the current technological world are generating data sequentially and rapidly, in the form of data streams. The evolving nature of processes may often cause changes in data distribution, also known as concept drift, which is difficult to detect and causes loss of accuracy in supervised learning algorithms. As a consequence, online machine learning algorithms that are able to update actively according to possible changes in the data distribution are required. Although many strategies …

Cited by 18 publications (9 citation statements)
References 45 publications
“…Basic regression models are SGD-Regressor and Passive-Aggressive-Regressor. The pruning strategy is worst first [3].…”
Section: Discussion (mentioning)
confidence: 99%
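
The two base models named in this statement are available as online learners in scikit-learn, which makes the quoted design easy to sketch. The wrapper below is a minimal illustration, assuming scikit-learn: the pool size, the decayed-error bookkeeping, and the class name WorstFirstEnsemble are illustrative choices rather than the cited papers' exact method; only the two base estimators and the worst-first pruning rule come from the statement.

```python
import numpy as np
from sklearn.linear_model import PassiveAggressiveRegressor, SGDRegressor

class WorstFirstEnsemble:
    """Illustrative online ensemble: linear online learners updated with
    partial_fit and pruned worst first."""

    def __init__(self, n_members=8, decay=0.9):
        # Alternate the two base models named in the citation statement.
        self.members = [
            SGDRegressor() if i % 2 == 0 else PassiveAggressiveRegressor()
            for i in range(n_members)
        ]
        self.errors = [0.0] * n_members  # running squared error per member
        self.decay = decay

    def predict(self, X):
        fitted = [m for m in self.members if hasattr(m, "coef_")]
        return np.mean([m.predict(X) for m in fitted], axis=0)

    def partial_fit(self, X, y):
        for i, m in enumerate(self.members):
            if hasattr(m, "coef_"):  # score on the incoming batch before updating
                e = float(np.mean((y - m.predict(X)) ** 2))
                self.errors[i] = self.decay * self.errors[i] + (1 - self.decay) * e
            m.partial_fit(X, y)
        return self

    def prune_worst(self):
        # Worst-first pruning: replace the member with the largest running error.
        worst = int(np.argmax(self.errors))
        self.members[worst] = SGDRegressor()
        self.errors[worst] = 0.0
```

Calling partial_fit batch by batch keeps a decayed squared-error estimate per member, and prune_worst applies the worst-first rule by replacing the weakest member with a fresh learner.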
“…Data is accessed only once and then discarded to limit memory and storage space usage [9].
- Batch-by-batch update has better stability than instance-by-instance [3,8] and is less sensitive to inaccurate data [3].
- The implicit method is more suitable than the explicit one (such as concept drift detection) in noisy data streams [20], because the latter may cause too many false alarms [8,20].…”
Section: Algorithm 2 DTOM Assessment and Update (mentioning)
confidence: 99%
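
A batch-by-batch loop with purely implicit adaptation, as this statement describes, might look like the sketch below (assuming scikit-learn; the generator protocol, the forgetting factor, and the function name run_stream are hypothetical, and this is not the cited DTOM algorithm itself):

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

def run_stream(batches, forgetting=0.95):
    """Consume (X, y) batches once each; no explicit drift detector."""
    model = SGDRegressor()
    mse_num, mse_den = 0.0, 0.0
    for X, y in batches:
        if hasattr(model, "coef_"):
            # Prequential evaluation: test on the new batch before training on it.
            err = float(np.mean((y - model.predict(X)) ** 2))
            # Older batches fade via the forgetting factor, so the error
            # estimate tracks the current concept without raising alarms.
            mse_num = forgetting * mse_num + err
            mse_den = forgetting * mse_den + 1.0
        # Implicit adaptation: each partial_fit step tilts the weights toward
        # the latest batch, which is then discarded to bound memory use.
        model.partial_fit(X, y)
    return model, (mse_num / mse_den if mse_den else None)
```

Because each batch is seen once and then dropped, memory stays bounded, and a distribution change surfaces as a rising forgetting-weighted error rather than as an explicit alarm.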
“…It improves the overall predictive performance, decreases the risk of settling in a local minimum, and provides a better fit to the data space by combining the predictions of several weak learners into a strong learning algorithm. In this study, we applied bagging and boosting because they are widely used, effective approaches for constructing ensemble learning algorithms [76][77][78][79]. Bagging is a technique that uses bootstrap sampling to reduce the variance of a decision tree and improve the accuracy of a learning algorithm by creating a collection of learners that are trained in parallel [79][80][81].…”
Section: Classification (mentioning)
confidence: 99%
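
To make the bagging-versus-boosting contrast in this statement concrete, here is a short sketch (assuming scikit-learn ≥ 1.2 for the estimator keyword; the estimators, sample sizes, and hyperparameters are placeholder choices rather than the cited study's configuration):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor, GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)

# Bagging: bootstrap-sample the training set, fit one tree per sample in
# parallel, and average the predictions to reduce variance.
bagger = BaggingRegressor(
    estimator=DecisionTreeRegressor(),
    n_estimators=50,
    bootstrap=True,
    n_jobs=-1,
    random_state=0,
).fit(X, y)

# Boosting, by contrast, fits learners sequentially, each one on the
# residual errors left by the ensemble so far.
booster = GradientBoostingRegressor(n_estimators=50, random_state=0).fit(X, y)
```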
“…For our experiments, the evolutionary operators (a) ranking selection, (b) uniform crossover, and (c) random mutation were used [76,77]. In addition, we applied elite-strategy selection, and crossover and mutation probabilities P_c and P_m were assigned to each population.…”
Section: Structure Optimization Genetic Algorithm (SOGA) (mentioning)
confidence: 99%
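
The three operators and the elite strategy named in this statement can be written down compactly. The sketch below assumes a bit-string encoding and a generic fitness function, both placeholders rather than the SOGA paper's actual network-structure representation:

```python
import random

def ranking_selection(pop, fitness, k):
    # Rank individuals; selection weight grows with rank (best = highest).
    ranked = sorted(pop, key=fitness)
    return random.choices(ranked, weights=range(1, len(ranked) + 1), k=k)

def uniform_crossover(a, b, p_c=0.9):
    # With probability p_c, each gene independently swaps between parents.
    if random.random() > p_c:
        return a[:], b[:]
    c1, c2 = a[:], b[:]
    for i in range(len(a)):
        if random.random() < 0.5:
            c1[i], c2[i] = c2[i], c1[i]
    return c1, c2

def random_mutation(ind, p_m=0.01):
    # Flip each bit independently with probability p_m.
    return [1 - g if random.random() < p_m else g for g in ind]

def evolve(pop, fitness, elite=2, p_c=0.9, p_m=0.01):
    # Elite strategy: carry the best individuals over unchanged.
    next_gen = sorted(pop, key=fitness, reverse=True)[:elite]
    while len(next_gen) < len(pop):
        a, b = ranking_selection(pop, fitness, 2)
        for child in uniform_crossover(a, b, p_c):
            next_gen.append(random_mutation(child, p_m))
    return next_gen[:len(pop)]

if __name__ == "__main__":
    # Toy run: maximise the number of ones in a 20-bit string.
    pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
    for _ in range(50):
        pop = evolve(pop, fitness=sum)
    print(max(sum(ind) for ind in pop))
```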