2008
DOI: 10.1007/s10115-007-0112-4

DSM-FI: an efficient algorithm for mining frequent itemsets in data streams

Cited by 64 publications (45 citation statements)
References 10 publications
“…They proposed two single-pass algorithms, Sticky Sampling and Lossy Counting, both of which are based on the anti-monotone property; these algorithms provide approximate results with an error bound. Li et al. [15,16] proposed DSM-FI and DSM-MFI to mine frequent patterns using a landmark window. Each transaction is converted into k small transactions and inserted into an extended prefix-tree-based summary data structure called the item-suffix frequent itemset forest.…”
Section: Fig. 2 CanTree in lexicographic order
confidence: 99%
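The decomposition this excerpt describes can be illustrated with a short sketch: each k-item transaction is split into its k item-suffixes, and each suffix is inserted into a small prefix tree rooted at its first item. This is a minimal illustration in Python, assuming a toy nested-dict structure; the names `item_suffixes` and `IsFIForest` are illustrative, not the authors' code, and support counting and pruning are omitted.

```python
# Hedged sketch of the k item-suffix decomposition described in the excerpt.
# Names (item_suffixes, IsFIForest) and the nested-dict trees are illustrative.

def item_suffixes(transaction):
    """Yield the k item-suffix sub-transactions of a k-item transaction."""
    items = list(transaction)
    for i in range(len(items)):
        yield tuple(items[i:])          # (a, b, c) -> (a, b, c), (b, c), (c)

class IsFIForest:
    """Toy stand-in for the item-suffix frequent itemset forest:
    one prefix tree (a nested dict) rooted at each distinct item."""
    def __init__(self):
        self.roots = {}                  # first item -> nested {item: subtree}

    def insert(self, transaction):
        for suffix in item_suffixes(transaction):
            node = self.roots.setdefault(suffix[0], {})
            for item in suffix[1:]:
                node = node.setdefault(item, {})

forest = IsFIForest()
forest.insert(("a", "b", "c"))           # inserts (a, b, c), (b, c), (c)
print(sorted(forest.roots))              # ['a', 'b', 'c']
```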
“…They also define the first single-pass algorithm for data streams based on the anti-monotonic property. Li et al. [19] use an extended prefix-tree-based representation and a top-down frequent itemset discovery scheme. In [29], the authors propose a regression-based algorithm to find frequent itemsets in sliding windows.…”
Section: Related work
confidence: 99%
“…Itemset {ABCD} is a superset of itemset {AB} and itemset {CD}. Therefore, we process function CheckSet in Figure 3.8, lines 13-18. Itemsets {AB} and {CD} become itemset {ABCD}'s children.…”
Section: Data insertion
confidence: 99%
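The parent-child re-linking this excerpt describes can be sketched as follows. This is a hedged illustration only, not the cited figure's pseudocode: the flat `top_level` dictionary, the name `check_set`, and the single-level children lists are assumptions made for brevity.

```python
# Hedged sketch of the superset check the excerpt attributes to CheckSet:
# when a newly inserted itemset subsumes existing itemsets, those itemsets
# are detached from the top level and re-attached as its children.

def check_set(top_level, new_itemset):
    """top_level: dict mapping frozenset itemset -> list of child itemsets."""
    new_key = frozenset(new_itemset)
    children = [k for k in list(top_level) if k < new_key]   # proper subsets
    for k in children:
        del top_level[k]              # detach the subsumed itemsets...
    top_level[new_key] = children     # ...and re-attach them under the superset
    return top_level

nodes = {frozenset("AB"): [], frozenset("CD"): []}
check_set(nodes, "ABCD")
print(nodes[frozenset("ABCD")])       # the two subsumed itemsets {AB} and {CD}
```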
“…According to different stream processing models, research on mining frequent itemsets in data streams can be divided into three categories: landmark windows [15] as shown in Figure 1.2, sliding windows [9,11,12,16] as shown in Figure 1.3, and damped windows [5,18] as shown in Figure 1.4. In the landmark window model, knowledge discovery is performed on the values between a specific timestamp, called the landmark, and the present.…”
Section: Window models in data streams
confidence: 99%
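The three window models this excerpt distinguishes can be contrasted with a small sketch, assuming a stream given as (timestamp, transaction) pairs; the window width and decay factor below are illustrative values, not parameters taken from the cited papers.

```python
# Hedged sketch contrasting the landmark, sliding, and damped window models
# named in the excerpt. Constants are illustrative only.

def landmark_window(stream, landmark):
    """All transactions from the landmark timestamp up to the present."""
    return [t for ts, t in stream if ts >= landmark]

def sliding_window(stream, width):
    """Only the `width` most recent transactions."""
    return [t for _, t in stream[-width:]]

def damped_window(stream, decay=0.9):
    """Every transaction, down-weighted the older it is."""
    now = stream[-1][0]
    return [(t, decay ** (now - ts)) for ts, t in stream]

stream = [(1, {"a", "b"}), (2, {"b", "c"}), (3, {"a", "c"}), (4, {"a"})]
print(landmark_window(stream, landmark=2))   # transactions from time 2 onward
print(sliding_window(stream, width=2))       # the two most recent transactions
print(damped_window(stream))                 # (transaction, weight) pairs
```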