Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016) 2016
DOI: 10.18653/v1/s16-1144
ICL-HD at SemEval-2016 Task 10: Improving the Detection of Minimal Semantic Units and their Meanings with an Ontology and Word Embeddings

Abstract: This paper presents our system submitted for SemEval 2016 Task 10: Detecting Minimal Semantic Units and their Meanings (DiMSUM; Schneider, Hovy, et al., 2016). We extend AMALGrAM (Schneider and Smith, 2015) by tapping two additional information sources. The first information source uses a semantic knowledge base (YAGO3; Suchanek et al., 2007) to improve supersense tagging (SST) for named entities. The second information source employs word embeddings (GloVe; Pennington et al., 2014) to capture fine-grained lat…

Cited by 8 publications (15 citation statements) | References 19 publications
“…The average F-score of the models on the five-fold cross-validation set, and their F-score on the test set, along with their generalization, is shown in Table 2. All models except that of Kirilin et al. (2016), which was already optimized for this task by its authors, were run on the validation set to tune their parameters. To evaluate the performance of the models on the test set, the models were trained on the entire training set (which includes the validation splits) and then tested on the test set.…”
Section: Results
confidence: 99%
“…The recent SemEval shared task on Detecting Minimal Semantic Units and their Meanings (DiMSUM) focused on MWE identification along with supersense tagging (Schneider et al., 2016). The best-performing system for MWE identification in this shared task was that of Kirilin et al. (2016), which took into consideration all of the basic features used by Schneider et al. (2014a) plus two novel feature sets. The first is based on the YAGO ontology (Suchanek et al., 2007), where heuristics were applied to extract potential named entities from the ontology.…”
Section: Related Work
confidence: 99%
“…From the ICL-HD team (Kirilin et al., 2016), S214 uses the AMALGrAM sequence tagger (Schneider and Smith, 2015) with an augmented feature set that leverages word embeddings and a knowledge base. The word embedding features, the knowledge base-derived features, and their union all improve over the condition with no new features, with respect to both MWE performance and supersense performance.…”
Section: Synopsis Of Approaches
confidence: 99%
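The feature augmentation described in the citation statements above — extending a sequence tagger's per-token features with word-embedding dimensions and a knowledge-base lookup — can be sketched roughly as follows. This is a minimal illustration of the general idea, not the actual ICL-HD implementation; the toy embedding table, the `KB_ENTITIES` gazetteer, and all function names are hypothetical stand-ins for GloVe vectors and YAGO-derived entities.

```python
# Hypothetical sketch: augment per-token base features with
# (a) word-embedding dimensions and (b) a knowledge-base gazetteer flag.
# TOY_EMBEDDINGS stands in for GloVe vectors; KB_ENTITIES stands in for
# named entities extracted from YAGO. Neither reflects the real system.

TOY_EMBEDDINGS = {
    "new": [0.05, 0.33],
    "york": [0.12, -0.40],
}

KB_ENTITIES = {"new york"}  # multiword named entities from the knowledge base


def augment_features(tokens, i, base_features):
    """Extend base features for tokens[i] with embedding and KB features."""
    feats = dict(base_features)

    # Embedding features: one real-valued feature per embedding dimension.
    vec = TOY_EMBEDDINGS.get(tokens[i].lower())
    if vec is not None:
        for d, value in enumerate(vec):
            feats[f"emb_{d}"] = value

    # KB feature: does a bigram starting at this token match a known entity?
    if i + 1 < len(tokens):
        bigram = f"{tokens[i]} {tokens[i + 1]}".lower()
        feats["in_kb"] = bigram in KB_ENTITIES

    return feats


tokens = ["New", "York", "pizza"]
feats = augment_features(tokens, 0, {"word": "New"})
print(feats["in_kb"])   # True: "new york" is in the gazetteer
print(feats["emb_0"])   # 0.05, first embedding dimension of "new"
```

In a CRF-style tagger such as AMALGrAM, feature dictionaries like `feats` would be computed for every token position and fed to the learner alongside the basic lexical and orthographic features.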