Proceedings of the 11th Knowledge Capture Conference 2021
DOI: 10.1145/3460210.3493586
Discovering Interpretable Topics by Leveraging Common Sense Knowledge

Cited by 3 publications (3 citation statements)
References 9 publications
“…I. Harrando et al. 2021 [16] proposed a model that extracts topics from text documents based on the common-sense knowledge available in ConceptNet, combined ConceptNet's knowledge graph with graph embeddings, highlighted the importance of modeling choices and criteria for designing the model, and demonstrated that it can be used to label data for a supervised classifier…”
Section: Literature Review (mentioning)
confidence: 99%
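To make the cited approach concrete, here is a minimal sketch of the underlying idea: ranking human-readable topic labels for a document by similarity in a ConceptNet-derived embedding space (Numberbatch). This is not the authors' actual pipeline; the vector file path, the example terms, and the candidate labels are illustrative assumptions.

```python
# Minimal sketch (not the cited paper's exact method): score candidate topic
# labels for a document by cosine similarity between the document's terms
# and each label in a ConceptNet Numberbatch embedding space.
# The local file path below is an assumption; Numberbatch is distributed as
# word2vec-format text, which gensim can load directly.

import numpy as np
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "numberbatch-en-19.08.txt.gz", binary=False)  # assumed local path

def embed_terms(terms):
    """Average the embeddings of the terms present in the vocabulary."""
    known = [vectors[t] for t in terms if t in vectors]
    return np.mean(known, axis=0) if known else None

def score_topics(doc_terms, candidate_topics):
    """Return candidate topic labels ranked by cosine similarity to the document."""
    doc_vec = embed_terms(doc_terms)
    if doc_vec is None:
        return []
    scores = {}
    for topic in candidate_topics:
        topic_vec = embed_terms([topic])
        if topic_vec is None:
            continue
        scores[topic] = float(
            np.dot(doc_vec, topic_vec)
            / (np.linalg.norm(doc_vec) * np.linalg.norm(topic_vec)))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: rank interpretable topic labels for a short banking document.
print(score_topics(["loan", "credit", "interest", "bank"],
                   ["finance", "sports", "health", "politics"]))
```

In this framing, the highest-scoring label serves as an interpretable topic, which could then be used to label data for a supervised classifier, as the excerpt above notes.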
“…a: X-Class (Wang et al., 2021), a SOTA keyword-based method. b: (Harrando and Troncy, 2021). c: Crowdsourced human performance from Alex et al. (2021) (they used a selected portion of Banking77).…”
Section: I2 Additional Implementation Details (mentioning)
confidence: 99%
“…In this scenario, augmenting the labels can be useful: building a context (manually or automatically) that can be used to refine the quality of the final embedding helps the process of semantically aligning labels and texts [8,9]. This augmentation can be carried out in various ways, e.g., scraping Wikipedia descriptions, exploring the WordNet taxonomy, or navigating a CommonSense graph [10] to gather related content, words and meaningful relations that could help maximise the similarity between a context and a given example.…”
Section: Related Work (mentioning)
confidence: 99%
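As a rough illustration of the label-augmentation idea described in this excerpt, the sketch below expands a label with its ConceptNet neighbours (via the public api.conceptnet.io endpoint) and ranks labels against a text by embedding similarity. The encoder choice, the 20-edge limit, and the example inputs are assumptions, not values taken from the cited works.

```python
# Minimal sketch of label augmentation via a common-sense graph:
# build a textual context for each label from its ConceptNet neighbours,
# then rank labels by embedding similarity to the input text.
# The SentenceTransformer model name and the edge limit are assumed choices.

import requests
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder

def augment_label(label, limit=20):
    """Expand a label with English terms from its ConceptNet edges."""
    url = f"http://api.conceptnet.io/c/en/{label}?limit={limit}"
    edges = requests.get(url, timeout=10).json().get("edges", [])
    neighbours = []
    for edge in edges:
        for node in (edge.get("start", {}), edge.get("end", {})):
            term = node.get("label", "")
            if node.get("language") == "en" and term.lower() != label:
                neighbours.append(term)
    return label + " " + " ".join(neighbours)

def rank_labels(text, labels):
    """Rank labels by cosine similarity between the text and augmented labels."""
    contexts = [augment_label(label) for label in labels]
    text_emb = model.encode(text, convert_to_tensor=True)
    label_embs = model.encode(contexts, convert_to_tensor=True)
    sims = util.cos_sim(text_emb, label_embs)[0]
    return sorted(zip(labels, sims.tolist()), key=lambda kv: kv[1], reverse=True)

# Example: align a short text with candidate labels whose contexts were
# enriched with ConceptNet neighbours.
print(rank_labels("I would like to dispute a charge on my card.",
                  ["banking", "travel", "cooking"]))
```

The augmented context pulls related words into each label's embedding, which is the mechanism the excerpt credits with improving the semantic alignment between labels and texts.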