2008
DOI: 10.1007/978-3-540-88411-8_8
Learning Model Trees from Data Streams

Cited by 24 publications (20 citation statements)
References 8 publications
“…There are several algorithms to grow linear regression trees and classic regression trees using batch (Breiman et al. 1984; Quinlan 1992; Torgo 1997; Dobra and Gehrke 2002) or incremental learning approaches (Potts 2004; Ikonomovska and Gama 2008; Ikonomovska et al. 2009). …”
Section: Fuzzy Linear Regression Trees
Confidence: 99%
“…To the best of our knowledge, there exists no incremental algorithm for learning multi-target regression or model trees. We then briefly summarize the FIMT algorithm [9], which is the basis of the proposed FIMT-MT algorithm. …”
Section: Related Work
Confidence: 99%
“…The algorithm extends the incremental single-target model tree proposed in [9] by adopting the principles of the predictive-clustering methodology in the split selection criterion. The linear models are computed using a lightweight approach based on incremental training of perceptrons in the leaves of the tree. …”
Section: Introduction
Confidence: 99%
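The lightweight approach quoted above — training a perceptron incrementally in each leaf instead of storing examples — can be sketched as follows. This is a minimal illustration under assumed names and a fixed learning rate, not the FIMT-MT implementation:

```python
import random

class LeafPerceptron:
    """Incremental linear model kept in a tree leaf (sketch).

    The weight vector is updated with the delta rule on every example,
    so the leaf stores only the weights, never the examples."""

    def __init__(self, n_features, learning_rate=0.05):
        self.bias = 0.0
        self.w = [0.0] * n_features
        self.lr = learning_rate

    def predict(self, x):
        return self.bias + sum(wi * xi for wi, xi in zip(self.w, x))

    def partial_fit(self, x, y):
        # Delta rule: move weights proportionally to the prediction error.
        error = y - self.predict(x)
        self.bias += self.lr * error
        for i, xi in enumerate(x):
            self.w[i] += self.lr * error * xi


# Usage: learn y = 2*x0 - x1 from a stream of examples.
random.seed(0)
leaf = LeafPerceptron(n_features=2)
for _ in range(5000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    leaf.partial_fit(x, 2 * x[0] - x[1])
print(round(leaf.predict([0.5, 0.5]), 2))
```

Because only the weight vector is retained, memory use per leaf is constant regardless of how many stream examples the leaf has seen.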
“…Regarding memory management, Maloof and Michalski [5] suggested that learning algorithms follow one of these three possibilities for the memory model when dealing with past training examples: No instance memory, in which the incremental learner retains no examples in memory but instead statistics about them (e.g., the FIMT and FIRT-DD algorithms [6], [7], the Online-RD/RA algorithms [8], and neural networks build this kind of regression model), Full instance memory, in which the method retains all past training examples (e.g., M5 [9], CART [10], and IBk [11] are regression algorithms following this approach), and Partial instance memory, a strategy mainly oriented toward dealing with changes in underlying patterns, in which the learning algorithm retains some of the past training examples within a window. The size of this window may be fixed or adaptive. …”
Section: Related Work
Confidence: 99%
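The third memory model quoted above — partial instance memory over a fixed-size window — can be sketched with a bounded deque, so that old examples fall out automatically and the learner adapts after a drift. The class name and the deliberately trivial mean-predictor model are assumptions made for illustration:

```python
from collections import deque

class WindowedMeanRegressor:
    """Partial-instance-memory learner (sketch): retains only the most
    recent `window_size` examples, so it can adapt when the underlying
    pattern drifts. The model is deliberately trivial: it predicts the
    mean target of the examples still inside the window."""

    def __init__(self, window_size=100):
        # deque with maxlen discards the oldest example on overflow.
        self.window = deque(maxlen=window_size)

    def learn(self, x, y):
        self.window.append((x, y))

    def predict(self, x):
        if not self.window:
            return 0.0
        return sum(y for _, y in self.window) / len(self.window)


# Usage: the target level shifts mid-stream; the window forgets the old level.
model = WindowedMeanRegressor(window_size=50)
for t in range(200):
    y = 1.0 if t < 100 else 5.0   # concept drift at t = 100
    model.learn(t, y)
print(model.predict(None))  # only post-drift examples remain: 5.0
```

A fixed window as shown trades adaptation speed against stability; an adaptive window, as the excerpt notes, would resize in response to detected change.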