Proceedings of the 2009 SIAM International Conference on Data Mining (2009)
DOI: 10.1137/1.9781611972795.23

Positive Unlabeled Learning for Data Stream Classification

Abstract: Learning from positive and unlabeled examples (PU learning) has been investigated in recent years as an alternative learning model for dealing with situations where negative training examples are not available. It has many real-world applications, but it has yet to be applied in the data stream environment, where it is highly possible that only a small set of positive data and no negative data is available. An important challenge is to address the issue of concept drift in the data stream environment, which is …

Cited by 100 publications (85 citation statements) | References 30 publications
“…The proposed one-pass incremental change detection algorithm was applied to the detection of video-shot changes. Li et al (2009b) proposed a novel PU (Positive and Unlabelled examples) learning technique LELC (PU Learning by Extracting Likely positive and negative microClusters). It requires only a small set of positive examples and a set of unlabelled examples to build accurate classifiers.…”
Section: Literature Review
confidence: 99%
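The micro-cluster idea mentioned in that statement can be summarised as a two-step recipe: partition the unlabeled data into small clusters, judge whole clusters as likely positive or likely negative, and then train an ordinary binary classifier on the enlarged labeled sets. The sketch below is a minimal, hedged illustration of that idea in batch form, not the LELC algorithm itself; the KMeans clustering, the centroid-distance scoring, the median cutoff, and all function names are assumptions introduced purely for illustration.

```python
# Minimal sketch of a micro-cluster-style PU learning pipeline (illustrative,
# not the LELC algorithm from Li et al. 2009b). Requires numpy and scikit-learn.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression


def pu_microcluster_classifier(X_pos, X_unlabeled, n_clusters=20):
    """Train a binary classifier from positive and unlabeled examples only."""
    # Step 1: partition the unlabeled examples into micro-clusters.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(X_unlabeled)

    # Step 2: score each micro-cluster by the distance of its centre to the
    # positive centroid; distant clusters are treated as likely negative,
    # nearby clusters as likely positive (median cutoff is an assumption).
    pos_centroid = X_pos.mean(axis=0)
    dists = np.linalg.norm(km.cluster_centers_ - pos_centroid, axis=1)
    cutoff = np.median(dists)
    likely_neg = X_unlabeled[np.isin(labels, np.where(dists > cutoff)[0])]
    likely_pos = X_unlabeled[np.isin(labels, np.where(dists <= cutoff)[0])]

    # Step 3: train a standard binary classifier on positives plus the
    # cluster-derived likely-positive and likely-negative examples.
    X_train = np.vstack([X_pos, likely_pos, likely_neg])
    y_train = np.concatenate([
        np.ones(len(X_pos) + len(likely_pos)),
        np.zeros(len(likely_neg)),
    ])
    return LogisticRegression(max_iter=1000).fit(X_train, y_train)
```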
“…The methods in the first category [26,27,28] are developed mainly for document-related one-class classification problems. Their main task is to extract negative samples from unlabeled examples (if unlabeled examples are offered) and to construct a binary classifier using target samples and extracted negative samples.…”
Section: One-class Learning
confidence: 99%
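The extract-negatives-then-classify recipe described in that passage can be sketched as follows. This is a hedged illustration of the common two-step document PU approach, not the exact procedure of [26,27,28]: all unlabeled documents are provisionally treated as negative, a rough Naive Bayes model is fitted, the unlabeled examples the model considers least positive are kept as "reliable negatives", and a standard classifier is then trained on positives plus extracted negatives. The term-count feature assumption, the keep_ratio parameter, and the helper names are hypothetical.

```python
# Illustrative two-step PU learning for documents: extract reliable negatives
# from the unlabeled set, then train a standard binary classifier.
# Assumes dense, non-negative term-count feature matrices.
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC


def extract_reliable_negatives(X_pos, X_unlabeled, keep_ratio=0.3):
    """Return unlabeled examples the rough model is most confident are negative."""
    X = np.vstack([X_pos, X_unlabeled])
    y = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_unlabeled))])
    rough = MultinomialNB().fit(X, y)            # unlabeled provisionally negative
    p_pos = rough.predict_proba(X_unlabeled)[:, 1]
    n_keep = max(1, int(keep_ratio * len(X_unlabeled)))
    idx = np.argsort(p_pos)[:n_keep]             # least positive-looking examples
    return X_unlabeled[idx]


def train_pu_classifier(X_pos, X_unlabeled):
    """Step 2: binary classifier over positives and extracted negatives."""
    X_neg = extract_reliable_negatives(X_pos, X_unlabeled)
    X = np.vstack([X_pos, X_neg])
    y = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_neg))])
    return LinearSVC().fit(X, y)
```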
“…This setting is also known as Positive and Unlabeled (PU) learning, which aims to learn from data with only positive and unlabeled examples. It has been shown to be useful in many data mining tasks [8], [9], [10], [11], such as text mining, uncertain data mining, stream mining, etc. Formally, the graph PU learning problem corresponds to training a model to identify a subset of unlabeled graphs that are most likely to be negative graphs.…”
Section: Introduction
confidence: 99%
“…Conventional PU learning approaches can identify a group of reliable negative examples in the vector space. These approaches assume that the full set of useful features is available [9], [10], [11]. However, graph data are not directly represented in a feature space, and they require an additional subgraph feature mining process by evaluating the subgraph patterns in a graph data set.…”
Section: Introduction
confidence: 99%