2013
DOI: 10.3390/e15104159
Learning Entropy: Multiscale Measure for Incremental Learning

Abstract: First, this paper recalls a recently introduced method of adaptive monitoring of dynamical systems and presents the most recent extension with a multiscale-enhanced approach. Then, it is shown that this concept of real-time data monitoring establishes a novel non-Shannon and non-probabilistic concept of novelty quantification, i.e., the Entropy of Learning or, in short, the Learning Entropy. This novel cognitive measure can be used for the evaluation of each newly measured sample of data, or even of whole intervals. Th…

Cited by 23 publications (31 citation statements). References 47 publications.
“…This is justified by the notion that the prediction error of the model carries information about the size of the inaccuracy, while the adaptive weight increments carry information about how much the model tried to adapt to new data. Thus, even when the model error is high, it is not necessarily related to the adaptive weight increments (for more on this principle, please refer to the works of [20]-[22]). In the case of a high error value caused by common phenomena (i.e., phenomena beyond the model's capability to learn), novelty detection does not mark those particular samples, because the model has already recognized its inability to learn them, which further justifies the functionality of this method for novelty detection.…”
Section: Discussion
confidence: 99%
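The separation of these two information channels can be illustrated with a minimal sketch (not the authors' implementation; the LMS predictor, window length, and threshold factor below are illustrative assumptions): an online gradient-descent predictor tracks its prediction error and its weight increments separately, and flags novelty only when the increments depart from their own recent history.

```python
import numpy as np

def lms_step(w, x, y, mu=0.05):
    """One step of a gradient-descent (LMS) adaptive linear predictor.

    Returns the prediction error and the weight-increment vector; the
    error measures inaccuracy, while the increments measure how much
    the model tried to adapt on the new sample.
    """
    e = y - w @ x        # prediction error
    dw = mu * e * x      # adaptation increments of the weights
    return e, dw

# Illustrative run: a signal the model can learn, with one injected anomaly.
rng = np.random.default_rng(0)
n, order = 400, 4
signal = np.sin(0.2 * np.arange(n + order)) + 0.05 * rng.standard_normal(n + order)
signal[250] += 1.5       # novelty: a single perturbed sample

w = np.zeros(order)
errors, increments = [], []
for k in range(n):
    x = signal[k:k + order]              # recent samples as regressor
    e, dw = lms_step(w, x, signal[k + order])
    w += dw
    errors.append(abs(e))
    increments.append(np.abs(dw).sum())

# Flag samples whose weight increments are unusually large relative to
# their recent history; a large error alone is not treated as novelty.
window, alpha = 50, 4.0
flags = [k for k in range(window, n)
         if increments[k] > alpha * np.mean(increments[k - window:k])]
print("samples flagged as novel:", flags)
```

A sample with a large error but ordinary increments is left unflagged, which matches the rationale quoted above.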
“…the Adaptation Plot [20], which has recently been enhanced with a multi-scale approach [21]. The most recent method is the Learning Entropy, a multi-scale approach to evaluating unusual behaviour of the adaptive parameters of a learning model, introduced in [22]. This paper, however, introduces a different approach to novelty detection that is neither based on statistical approaches nor on evaluation of the error residual.…”
Section: Introduction
confidence: 99%
“…The first theoretical formula for estimating LE was proposed as a multifractal (multiscale) measure for the slopes of power-law quantification in a log-log plot. Later on, cumulative sums were proposed for practical approximations of LE, i.e.,

$$\mathrm{LE}(k)=\frac{1}{n_\alpha \cdot n_w}\sum_{\alpha}\sum_{i=1}^{n_w}\Gamma\!\left(\left|\Delta w_i\right|>\alpha\cdot\overline{\left|\Delta w_i\right|}\right),$$

where

$$\Gamma=\begin{cases}1, & \text{if } \left|\Delta w_i\right|>\alpha\cdot\overline{\left|\Delta w_i\right|}\\ 0, & \text{otherwise}\end{cases}$$

is an indicator function, $\overline{\left|\Delta w_i\right|}$ is the mean absolute value of the recent history of the $i$-th weight adaptation increment (for details, see Section 3), and $\alpha$ is the multiscale sensitivity parameter, $\alpha\in\{\alpha_1,\alpha_2,\ldots,\alpha_{n_\alpha}\}$, $\alpha_1<\alpha_2<\cdots<\alpha_{n_\alpha}$.…”
Section: Learning Entropy
confidence: 99%
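The quoted approximation maps directly to code. Below is a minimal sketch under stated assumptions (the α grid and history length are illustrative choices, and `learning_entropy` is a hypothetical helper, not the authors' reference implementation):

```python
import numpy as np

def learning_entropy(dw_history, dw_k, alphas=(2.0, 4.0, 8.0, 16.0)):
    """Approximate Learning Entropy LE(k) for one sample, following the
    cumulative-sum formula quoted above.

    dw_history : (m, n_w) array, recent history of weight increments
    dw_k       : (n_w,) array, weight increments at sample k
    alphas     : increasing sensitivity parameters alpha_1 < ... < alpha_n
    """
    # \overline{|Delta w_i|}: mean absolute recent increment per weight
    mean_abs = np.mean(np.abs(dw_history), axis=0)
    n_alpha, n_w = len(alphas), dw_k.shape[0]
    # Count indicator-function hits over all scales and all weights
    count = sum(np.sum(np.abs(dw_k) > a * mean_abs) for a in alphas)
    return count / (n_alpha * n_w)

# Usage: ordinary increments yield LE near 0; an abrupt, unusually large
# adaptation effort yields LE near 1.
rng = np.random.default_rng(1)
history = 0.01 * rng.standard_normal((100, 5))
usual   = 0.01 * rng.standard_normal(5)
unusual = 0.2 * np.ones(5)
print(learning_entropy(history, usual))    # close to 0
print(learning_entropy(history, unusual))  # close to 1
```

LE(k) near 0 indicates adaptation consistent with recent history; values near 1 indicate increments that exceed even the largest sensitivity scale, i.e., unusually strong learning effort.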