2021
DOI: 10.1007/s00521-021-06035-1
Supervised learning in the presence of concept drift: a modelling framework

Abstract: We present a modelling framework for the investigation of supervised learning in non-stationary environments. Specifically, we model two example types of learning systems: prototype-based learning vector quantization (LVQ) for classification and shallow, layered neural networks for regression tasks. We investigate so-called student–teacher scenarios in which the systems are trained from a stream of high-dimensional, labeled data. Properties of the target task are considered to be non-stationary due to drift pr…
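The framework trains a student model online from a stream of high-dimensional examples labeled by a drifting teacher. As a rough illustration of that setup (a minimal sketch, not the authors' LVQ or layered-network models; the dimension N, learning rate eta, drift strength delta, and the tanh transfer function are assumptions), one might simulate a single sigmoidal student tracking a slowly rotating teacher vector:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500          # input dimension (the paper works in the high-dimensional limit)
eta = 0.1        # learning rate (illustrative)
delta = 0.01     # drift strength per example (illustrative)
steps = 20000

g = np.tanh      # sigmoidal transfer function (an assumption for this sketch)

B = rng.standard_normal(N)
B /= np.linalg.norm(B)        # drifting teacher vector defining the target rule
w = np.zeros(N)               # student weight vector, trained online

for t in range(steps):
    x = rng.standard_normal(N)            # one fresh high-dimensional example
    y = g(B @ x / np.sqrt(N))             # label supplied by the current teacher

    s = w @ x / np.sqrt(N)                # student pre-activation
    err = g(s) - y
    # online gradient step on the squared error of this single example
    w -= eta * err * (1.0 - g(s) ** 2) * x / np.sqrt(N)

    # "real" concept drift: the teacher vector performs a small random rotation
    B += (delta / np.sqrt(N)) * rng.standard_normal(N)
    B /= np.linalg.norm(B)

# the student-teacher overlap measures how well the drifting rule is tracked
R = w @ B / (np.linalg.norm(w) + 1e-12)
print(f"student-teacher overlap after {steps} examples: R = {R:.3f}")
```

Under persistent drift the overlap typically saturates below 1, reflecting the trade-off between tracking the current rule and the outdated information still stored in the weights.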

Cited by 9 publications (6 citation statements)
References 45 publications
“…In such circumstances, the learning system can identify and monitor the perception of drift, i.e., forgetting inappropriate and old information while constantly adjusting to the latest additional contributions. Concept drift has been the subject of previous studies [9,10] looking at its theoretical characteristics and statistical mechanics. This study aims to learn a regression technique that can be used in various situations through practical simulations of scenarios.…”
Section: Introduction (mentioning, confidence: 99%)
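The adaptation mechanism described in this statement, discarding outdated information while continuously incorporating new examples, can be made concrete with a standard recursive least-squares filter that uses a forgetting factor. This is a generic sketch rather than a method from the cited works; the dimension, forgetting factor lam, noise level, and abrupt-drift pattern are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5                     # feature dimension (illustrative)
lam = 0.98                # forgetting factor < 1: older examples are discounted
w = np.zeros(d)           # current regression estimate
P = np.eye(d) * 1e3       # scaled inverse correlation matrix of retained data

def true_weights(t, steps):
    """Target rule with an abrupt concept drift halfway through the stream."""
    before = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
    after = np.array([-1.0, 2.0, 0.5, 1.0, -3.0])
    return before if t < steps // 2 else after

steps = 2000
for t in range(steps):
    x = rng.standard_normal(d)
    y = true_weights(t, steps) @ x + 0.1 * rng.standard_normal()

    # recursive least squares with exponential forgetting:
    # every past example is down-weighted by lam at each step, so the
    # estimate "forgets" the pre-drift rule and adjusts to the new one
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    e = y - w @ x                    # error on the freshly arrived example
    w = w + k * e
    P = (P - np.outer(k, Px)) / lam

print("estimated weights after the drift:", np.round(w, 2))
print("post-drift target weights:        ", true_weights(steps, steps))
```

With lam close to 1 the filter averages over a long effective window and adapts slowly; smaller values forget faster but yield noisier estimates.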
“…The ADO-based hyperparameter tuning procedure [16] is used to find the DNN model's ideal parameters to improve the classifier performance. Liu et al [17] have demonstrated how the current data augmentation techniques either neglect the distribution of data or the spatial relationships among the features.…”
Section: Literature Survey (mentioning, confidence: 99%)
“…This is relevant in ML engines that continuously receive newly incoming data to process, which could be set up with health care, education, or consumer data (although admittedly not widely available in social and health sciences research at the time of writing). An example of concept drift over time would be the lowering of predictive power of poverty to explain children’s health when the societal determinants of poverty change over time; see an accessible introduction in (33) and recent developments in (34). While most research in the social and health sciences deals with static datasets to date, the importance of this concept will increase with the development of ML engines that continuously process newly incoming data, such as through tracking apps or social network data.…”
Section: Basics Of MLmentioning
confidence: 99%