2007
DOI: 10.1007/s10115-007-0107-1
SVM based adaptive learning method for text classification from positive and unlabeled documents

Cited by 62 publications (29 citation statements)
References 14 publications
“…7. We observed that the F-measures of our proposed incremental SVM method outperform PSOC [29] and SVM, two typical classification algorithms. Moreover, without clustering, using only the classifier to guide the topical crawling could obtain results in a shorter time.…”
Section: The Comparison Of Different Techniques (mentioning)
confidence: 73%
“…This incremental SVM modifies the SVM [8] formulation so that the final classifier has higher precision. In this paper, our previous 1-DNFII algorithm [29], which considers both the difference in a feature's frequency between the positive feature set and the unlabeled example set and the absolute frequency of the feature in the positive feature set, is used for collecting the positive and negative feature sets. If a feature occurs more frequently in the positive example set than in the unlabeled example set, then this feature is regarded as a positive feature.…”
Section: Building Incremental Classifier For Topical Web Crawling (mentioning)
confidence: 99%
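The frequency-comparison rule quoted above can be sketched as follows. This is a minimal illustration only, not the authors' 1-DNFII implementation; the function name, the use of relative frequencies, and the toy documents are all assumptions:

```python
from collections import Counter

def select_positive_features(positive_docs, unlabeled_docs):
    """Hypothetical sketch of the rule described above: a feature is
    marked positive when its relative frequency in the positive set
    exceeds its relative frequency in the unlabeled set."""
    pos_counts = Counter(w for doc in positive_docs for w in doc)
    unl_counts = Counter(w for doc in unlabeled_docs for w in doc)
    n_pos = sum(pos_counts.values()) or 1
    n_unl = sum(unl_counts.values()) or 1
    return {
        w for w in pos_counts
        if pos_counts[w] / n_pos > unl_counts.get(w, 0) / n_unl
    }

# Toy example: "svm" and "margin" are relatively more frequent in the
# positive documents, so they are selected as positive features.
positive = [["svm", "kernel", "svm"], ["svm", "margin"]]
unlabeled = [["news", "sports"], ["kernel", "news"]]
print(sorted(select_positive_features(positive, unlabeled)))  # ['margin', 'svm']
```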
“…Afterwards, we used k-means++ to classify all the activities into 10 clusters. Then we adopted the F1 measure for performance evaluation [24], which is commonly used in text classification. The best topic number Z should correspond to the highest F1 value.…”
Section: Parameter Learning (mentioning)
confidence: 99%
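The F1 measure mentioned above is the standard harmonic mean of precision and recall. A minimal sketch (the function name and counts are illustrative):

```python
def f1_measure(tp, fp, fn):
    """F1 as commonly used in text-classification evaluation:
    the harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# With 8 true positives, 2 false positives, 2 false negatives:
# precision = recall = 0.8, so F1 = 0.8.
print(f1_measure(tp=8, fp=2, fn=2))  # 0.8
```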
“…Although the resulting algorithm is formally similar, each dot product arising from the quadratic representation is replaced by a non-linear kernel function. SVMs are widely used and have achieved very good results in text-content categorization [29].…”
Section: Support Vector Machines (unclassified)