Proceedings of the 12th International Workshop on Semantic Evaluation 2018
DOI: 10.18653/v1/s18-1156
CitiusNLP at SemEval-2018 Task 10: The Use of Transparent Distributional Models and Salient Contexts to Discriminate Word Attributes

Abstract: This article describes the unsupervised strategy submitted by the CitiusNLP team to SemEval 2018 Task 10, a task that consists of predicting whether a word is a discriminative attribute between two other words. The proposed strategy relies on the correspondence between discriminative attributes and relevant contexts of a word. More precisely, the method uses transparent distributional models to extract salient contexts of words, which are identified as discriminative attributes. The system performance reaches …
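The decision rule sketched in the abstract — an attribute discriminates the first word from the second when it is a salient context of the first but not of the second — can be illustrated as follows. This is a minimal sketch, not the authors' implementation: the context-weight dictionaries, word triples, and `top_n` cutoff below are invented stand-ins for a transparent distributional model (e.g. association-weighted co-occurrence contexts).

```python
# Hypothetical sketch of the discriminative-attribute rule described in the
# abstract: predict 1 iff the attribute appears among the salient contexts
# of word1 but not among those of word2. The toy weight dictionaries stand
# in for a transparent distributional model with interpretable dimensions.

def salient_contexts(weights, top_n=3):
    """Return the top-n contexts of a word by association weight."""
    ranked = sorted(weights.items(), key=lambda kv: -kv[1])
    return {context for context, _ in ranked[:top_n]}

def is_discriminative(attr, w1_weights, w2_weights, top_n=3):
    """Predict 1 if attr is salient for word1 but not for word2."""
    return int(attr in salient_contexts(w1_weights, top_n)
               and attr not in salient_contexts(w2_weights, top_n))

# Toy "transparent" vectors: contexts and weights are invented.
apple = {"red": 2.1, "fruit": 1.8, "tree": 1.2, "pie": 0.9}
banana = {"yellow": 2.3, "fruit": 1.9, "peel": 1.1, "tree": 0.8}

print(is_discriminative("red", apple, banana))    # salient only for apple -> 1
print(is_discriminative("fruit", apple, banana))  # shared context -> 0
```

The key property of a transparent model, as opposed to dense embeddings, is that each dimension is a human-readable context, so a positive prediction can be traced back to the specific context that triggered it.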

Cited by 2 publications (2 citation statements). References 14 publications.
“…(No expl.) 0.69
(Attia et al, 2018) Google 5-grams and Word2Vec embeddings as features for a feedforward neural network | None | 0.67
(Zhou et al, 2018) Ensemble ML model with WordNet, PMI scores, Word2Vec, and GloVe embeddings | None | 0.67
(Kulmizev et al, 2018) A combination of GloVe and Paragram embeddings | None | 0.67
(Zhang and Carpuat, 2018) SVM with GloVe embeddings | None | 0.67
(Vinayan et al, 2018) CNN with GloVe embeddings | None | 0.66
(Grishin, 2018) Similarity calculations using a combination of DSMs | None | 0.65
Word2Vec, GloVe, and FastText embeddings as features for MLP-CNN | None | 0.63
(Gamallo, 2018) Dependency parsing and co-occurrence analysis | Transp. (No expl.)…”
Section: Discussion
confidence: 99%
“…With regard to interpretability and explainability, we can classify IDA approaches into three categories: frequency-based models over text-based features, heavily relying on textual features and frequency-based methods (Gamallo, 2018; González et al, 2018); ML over textual features (Dumitru et al, 2018; Sommerauer et al, 2018; King et al, 2018; Mao et al, 2018); and ML over dense vectors and textual features (Brychcín et al, 2018; Attia et al, 2018; Dumitru et al, 2018; Arroyo-Fernández et al, 2018; Speer and Lowry-Duda, 2018; Santus et al, 2018; Grishin, 2018; Zhou et al, 2018; Vinayan et al, 2018; Kulmizev et al, 2018; Zhang and Carpuat, 2018; Shiue et al, 2018). While the first category concentrates on models with higher interpretability, none of these models provide explanations.…”
Section: Related Work
confidence: 99%