2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW) 2016
DOI: 10.1109/icdmw.2016.0181
Centrality-Based Approach for Supervised Term Weighting

Cited by 6 publications (3 citation statements) · References 21 publications
“…To further investigate the effectiveness of our approach, we have compared our results with current state-of-the-art graph-based and non-graph-based methods. In Table 3 we compare against CNN for text classification without pre-trained word vectors (Kim, 2014), FastText (Joulin et al., 2017), TextRank (Mihalcea and Tarau, 2004), Word Attraction weights based on word2vec similarities (Wang et al., 2015), and Supervised Term Weighting (TW-CRC) by Shanavas et al. (2016). Our work produces results comparable to the state of the art.…”
Section: Results (mentioning)
confidence: 99%
“…Nevertheless, as we have noticed from the experimental evaluation, even using simple and easy-to-compute local criteria (e.g., degree), we achieve good classification performance. Shanavas et al. (2016) introduced supervised term weighting (TW-CRC) as a method to integrate class information with graphs. Similarly, we create a graph for each class (label), where we add all words of documents belonging to the respective class as nodes and their co-occurrences as edges.…”
Section: Inverse Collection Weight (ICW) (mentioning)
confidence: 99%
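The citing work describes building one graph per class, with the words of that class's documents as nodes, co-occurrences as edges, and a simple local criterion such as degree as the term weight. A minimal sketch of that idea follows; the function names, the sliding-window co-occurrence definition, and the use of plain degree (rather than the exact TW-CRC weighting of Shanavas et al.) are illustrative assumptions, not the authors' implementation:

```python
from collections import defaultdict

def class_cooccurrence_graphs(docs, labels, window=2):
    """Build one undirected co-occurrence graph per class.

    docs:   list of tokenized documents (lists of words)
    labels: class label of each document
    Two words are linked if they appear within `window` positions
    of each other in some document of that class (an assumed
    co-occurrence definition, for illustration).
    """
    # class label -> {word -> set of neighbouring words}
    graphs = defaultdict(lambda: defaultdict(set))
    for tokens, label in zip(docs, labels):
        g = graphs[label]
        for i, w in enumerate(tokens):
            # connect w to the preceding tokens inside the window
            for v in tokens[max(0, i - window + 1):i]:
                if v != w:
                    g[w].add(v)
                    g[v].add(w)
    return graphs

def degree_weights(graph):
    """Weight each term by its degree, the simple local
    centrality criterion mentioned in the citing passage."""
    return {word: len(neighbours) for word, neighbours in graph.items()}
```

For example, the two single-class documents `["x", "y"]` and `["x", "z"]` give `x` degree 2 and `y`, `z` degree 1, so `x` receives the highest weight for that class.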
“…Besides mapping relations between text units such as words, phrases, or sentences, a graph model can also store word co-occurrence information [15]. Using a graph model as a text representation improves accuracy in document classification problems [16]. Graph-based text representation requires no specific linguistic or domain knowledge, while still allowing the integration of external knowledge such as WordNet [17].…”
Section: ISSN 2301-4156 (unclassified)