Proceedings of the 2004 ACM Symposium on Applied Computing
DOI: 10.1145/967900.968036

Discovering decision rules from numerical data streams

Abstract: This paper presents a scalable learning algorithm to classify numerical, low-dimensionality, high-cardinality, time-changing data streams. Our approach, named SCALLOP, provides a set of decision rules on demand, which improves its simplicity and helpfulness for the user. SCALLOP updates the knowledge model every time a new example is read, adding interesting rules and removing out-of-date rules. As the model is dynamic, it maintains the tendency of data. Experimental results with synthetic data streams show a g…
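The per-example update loop the abstract describes (reinforce or grow a rule on every incoming example, discard out-of-date rules) can be sketched as below. All names and thresholds (Rule, process_stream, max_growth, max_age) are illustrative assumptions, not SCALLOP's actual interface:

```python
# Illustrative sketch of the per-example update loop described in the
# abstract: every incoming example either reinforces a covering rule,
# moderately generalizes a nearby rule of the same class, or spawns a
# new rule; rules that stop matching recent examples are discarded.
# All names and thresholds are assumptions, not SCALLOP's actual design.

class Rule:
    """Toy decision rule: one closed interval per normalized attribute."""

    def __init__(self, intervals, label, t):
        self.intervals = intervals          # list of [lo, hi] per attribute
        self.label = label
        self.last_hit = t                   # time of the last covered example

    @classmethod
    def from_example(cls, x, label, t):
        return cls([[v, v] for v in x], label, t)   # point rule around x

    def covers(self, x):
        return all(lo <= v <= hi for v, (lo, hi) in zip(x, self.intervals))

    def growth_needed(self, x):
        # largest stretch any interval needs to absorb x
        return max(max(lo - v, v - hi, 0.0)
                   for v, (lo, hi) in zip(x, self.intervals))

    def absorb(self, x, t):
        # moderate generalization: stretch each interval just enough
        for iv, v in zip(self.intervals, x):
            iv[0], iv[1] = min(iv[0], v), max(iv[1], v)
        self.last_hit = t


def process_stream(stream, max_growth=0.2, max_age=1000):
    rules = []
    for t, (x, label) in enumerate(stream, start=1):
        covering = [r for r in rules if r.label == label and r.covers(x)]
        if covering:
            for r in covering:
                r.last_hit = t
        else:
            near = [r for r in rules
                    if r.label == label and r.growth_needed(x) <= max_growth]
            if near:
                min(near, key=lambda r: r.growth_needed(x)).absorb(x, t)
            else:
                rules.append(Rule.from_example(x, label, t))
        # forget rules not reinforced recently (tracks concept drift)
        rules = [r for r in rules if t - r.last_hit <= max_age]
    return rules


stream = [([0.10, 0.20], "A"), ([0.15, 0.25], "A"), ([0.90, 0.90], "B")]
rules = process_stream(stream)
print(len(rules))                     # 2: one generalized "A" rule, one "B"
print(rules[0].covers([0.12, 0.22]))  # True: inside the stretched intervals
```

The age-based pruning stands in for whatever staleness criterion the paper actually uses; the point is only that the model is revised on every example rather than retrained in batch.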

Cited by 25 publications (16 citation statements). References 12 publications.
“…The rules discovery process must be methodical and exhaustive, akin to the requirements gathering phase of the system development life cycle. Many researchers have studied and proposed algorithms for learning and discovering new and significant organizational rules (Apte, 1994;Ferrer-Troyano, 2004).…”
Section: Related Literature
confidence: 99%
“…This paper describes an incremental learning algorithm that provides a set of decision rules induced from numerical data streams. Our proposal extends previous work [1] by filtering border examples that lie near to decision boundaries, so that every rule may retain a particular set of positive and negative examples. This information makes it possible to ignore false alarms with respect to virtual drifts and avoid hasty modifications.…”
Section: Introduction
confidence: 84%
“…, y_z} be the set of class labels. Let e_i = (x⃗_i, y_i) be the new i-th training example arriving, where x⃗_i is a normalized vector in [0,1]^m and y_i is a discrete value in Y. A decision rule r is given by a set of m closed intervals [I_jl, I_ju] (j ∈ {1, .…”
Section: Moderate Generalization
confidence: 99%
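The notation in the excerpt above maps directly onto a small coverage check: a rule over m normalized attributes is m closed intervals [I_jl, I_ju], and an example is covered when every coordinate falls inside its interval. Variable names follow the quote; this is an illustrative sketch, not the paper's implementation:

```python
# A rule r over m normalized attributes is a set of m closed intervals
# [I_jl, I_ju]; an example x in [0,1]^m is covered when every coordinate
# x_j lies inside its interval. Names mirror the quoted notation.

def rule_covers(intervals, x):
    """intervals: list of (I_jl, I_ju) pairs; x: vector in [0,1]^m."""
    return all(I_jl <= x_j <= I_ju
               for x_j, (I_jl, I_ju) in zip(x, intervals))

r = [(0.0, 0.5), (0.25, 0.75)]      # m = 2 closed intervals
print(rule_covers(r, [0.4, 0.5]))   # True: both coordinates inside
print(rule_covers(r, [0.6, 0.5]))   # False: 0.6 lies outside [0.0, 0.5]
```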
“…The main thrust on data stream mining in the context of classification has been that of one-pass mining [8], [13], [16], [18], [23]. In reality, the nature of the underlying changes in the data stream can impose considerable challenges.…”
Section: Introduction
confidence: 99%