2018
DOI: 10.1007/s11277-018-5416-z
A Sample Extension Method Based on Wikipedia and Its Application in Text Classification

Cited by 5 publications (6 citation statements)
References 13 publications
Citation types: 0 supporting, 6 mentioning, 0 contrasting
“…Using Sohu News and texts from Fudan University datasets, Zhu et al. (2018) outperformed the baseline method with 30% sample expansion. Based on the expansion of 100 samples, WSE with Naive Bayes achieved the best result, with an F-measure of approximately 72.5% on the Sohu dataset.…”
Section: Results Analysis Per Datasets
Citation type: mentioning (confidence: 99%)
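The evaluation setup this statement describes (a Naive Bayes text classifier scored by F-measure) can be illustrated with a minimal sketch. The toy documents, categories, and TF-IDF features below are illustrative assumptions, not the Sohu News or Fudan University data and not Zhu et al.'s actual feature pipeline.

```python
# Minimal sketch: Naive Bayes text classification scored with the
# F-measure. Toy documents stand in for the real datasets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import f1_score

# Placeholder labeled corpus (one category label per document).
train_docs = ["stock markets rally", "team wins championship",
              "central bank rates", "league season opener"]
train_labels = ["finance", "sports", "finance", "sports"]
test_docs = ["bank cuts interest rates", "final match of the season"]
test_labels = ["finance", "sports"]

# Bag-of-words features; the paper's exact features are not given here.
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_docs)
X_test = vectorizer.transform(test_docs)

clf = MultinomialNB()
clf.fit(X_train, train_labels)
pred = clf.predict(X_test)

# Macro-averaged F-measure, the same kind of score as the ~72.5% reported.
print("F-measure:", f1_score(test_labels, pred, average="macro"))
```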
“…In Chinese news text classification, Zhu et al. (2018) developed a method based on Wikipedia sample extension (WSE). A network graph was constructed with concepts and their links extracted from Wikipedia.…”
Section: Semi-supervised Learning For Text Classification
Citation type: mentioning (confidence: 99%)
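The graph-construction step this statement describes can be sketched as follows. The hand-coded concept links stand in for links actually extracted from Wikipedia, and the neighbor-based expansion rule is an assumption for illustration, not the paper's algorithm.

```python
# Sketch: Wikipedia concepts become nodes, hyperlinks between their
# articles become edges, and a sample's concept set is extended with
# graph neighbors. Links and expansion rule are illustrative assumptions.
import networkx as nx

# Placeholder for concept links extracted from Wikipedia articles.
wiki_links = {
    "machine learning": ["statistics", "artificial intelligence"],
    "text classification": ["machine learning", "natural language processing"],
    "naive bayes": ["machine learning", "statistics"],
}

graph = nx.Graph()
for concept, linked in wiki_links.items():
    for neighbor in linked:
        graph.add_edge(concept, neighbor)

def extend_sample(concepts):
    """Extend a sample's concept set with its neighbors in the graph."""
    extended = set(concepts)
    for c in concepts:
        if c in graph:
            extended.update(graph.neighbors(c))
    return extended

print(extend_sample({"naive bayes"}))
# -> {'naive bayes', 'machine learning', 'statistics'}
```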
“…Naïve Bayes; Support Vector Machine [92] It uses the knowledge of Wikipedia to extend the training samples, which is realized by network graph construction. Naïve Bayes; Support Vector Machine; Random Forest [93] It introduces an attentive meta-learning method for task-agnostic representation and realizes fast adaptation across different tasks, thus having the ability to learn shared representations across tasks.…”
Section: Author-study
Citation type: mentioning (confidence: 99%)
“…Contribution Basic Language Model/Classifier [91] It proposes a novel co-training algorithm that uses an ensemble of classifiers created over multiple training iterations, with labeled and unlabeled data trained jointly and no added computational complexity. Naïve Bayes; Support Vector Machine [92] It uses the knowledge of Wikipedia to extend the training samples, which is realized by network graph construction.…”
Section: Author-study
Citation type: mentioning (confidence: 99%)
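The iterative scheme described for [91] can likewise be sketched: each round trains a classifier, keeps it in an ensemble, and promotes confidently pseudo-labeled unlabeled samples into the labeled pool. The confidence threshold, round count, and majority-vote rule below are illustrative assumptions, not the cited paper's design.

```python
# Simplified sketch of an iterative ensemble-building loop in the spirit
# of the co-training description for [91]. Toy data and all thresholds
# are assumptions for illustration only.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X_labeled = rng.normal(0, 1, (20, 2))
y_labeled = (X_labeled[:, 0] > 0).astype(int)     # toy binary labels
X_unlabeled = rng.normal(0, 1, (100, 2))          # toy unlabeled pool

ensemble = []
for round_ in range(3):
    clf = GaussianNB().fit(X_labeled, y_labeled)
    ensemble.append(clf)                          # one member per iteration
    if len(X_unlabeled) == 0:
        break
    proba = clf.predict_proba(X_unlabeled)
    confident = proba.max(axis=1) > 0.9           # assumed threshold
    # Promote confident unlabeled samples with their pseudo-labels.
    X_labeled = np.vstack([X_labeled, X_unlabeled[confident]])
    y_labeled = np.concatenate([y_labeled, proba[confident].argmax(axis=1)])
    X_unlabeled = X_unlabeled[~confident]

def ensemble_predict(X):
    """Majority vote over the classifiers collected across iterations."""
    votes = np.stack([m.predict(X) for m in ensemble])
    return (votes.mean(axis=0) > 0.5).astype(int)

print(ensemble_predict(rng.normal(0, 1, (5, 2))))
```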