Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), 2016
DOI: 10.18653/v1/s16-1090

HHU at SemEval-2016 Task 1: Multiple Approaches to Measuring Semantic Textual Similarity

Abstract: This paper describes the HHU system that participated in Task 2 of SemEval 2017, Multilingual and Cross-lingual Semantic Word Similarity. We introduce our unsupervised embedding learning technique and describe how it was employed and configured to address the problems of monolingual and multilingual word similarity measurement. The paper reports on empirical evaluations against the benchmark provided by the task's organizers.
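The abstract leaves the embedding learning technique unspecified. As a minimal sketch of unsupervised embedding learning for word similarity, assuming gensim's Word2Vec and a toy corpus (neither is from the paper):

```python
# Minimal sketch of unsupervised word-embedding learning for word
# similarity; gensim's Word2Vec and the toy corpus are assumptions,
# not the paper's actual method or data.
from gensim.models import Word2Vec

# Hypothetical tokenized monolingual corpus.
corpus = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["a", "dog", "lay", "on", "the", "rug"],
]

# Train embeddings with no supervision signal.
model = Word2Vec(sentences=corpus, vector_size=100, window=5,
                 min_count=1, epochs=50, seed=42)

# Word similarity is then the cosine similarity of the two word vectors.
print(model.wv.similarity("cat", "dog"))
```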

Cited by 6 publications (3 citation statements); references 14 publications (11 reference statements).

Citation statements (ordered by relevance):

“…After filtering out, based on the mentioned entities, claims with non-scientific content (i.e., 62.3% of the total claims), we end up with a final set of ~4K scientific claims, of which 79.8% were determined to be False and 20.2% to be True. We relate claims by computing their Semantic Textual Similarity [31] and setting an appropriate threshold (0.9 in our experiments).…”
Section: Enhanced Fact-checking Context (mentioning)
Confidence: 99%
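The thresholding step this citing work describes can be sketched as follows, assuming a sentence-transformers model and invented example claims; the actual STS method of [31] is not specified on this page:

```python
# Sketch of relating claims via Semantic Textual Similarity and a
# 0.9 threshold, as the citing work describes. The model choice and
# the example claims are assumptions for illustration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

claims = [  # hypothetical claims
    "Vaccines cause autism.",
    "Vaccination is a cause of autism.",
    "The Earth orbits the Sun.",
]

embeddings = model.encode(claims, convert_to_tensor=True)
similarities = util.cos_sim(embeddings, embeddings)

THRESHOLD = 0.9  # the threshold reported by the citing work

# Relate every pair of distinct claims whose similarity clears the threshold.
for i in range(len(claims)):
    for j in range(i + 1, len(claims)):
        if similarities[i][j] >= THRESHOLD:
            print(f"related: {claims[i]!r} <-> {claims[j]!r}")
```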
“…This is a popular task in the International Workshop on Semantic Evaluation (SemEval). Three approaches that are part of many proposed methods over the last few years include: (i) surface-level similarity (e.g., similarity between sets or sequences of words or named entities in the two documents); (ii) context similarity (e.g., similarity between document representations); and (iii) topical similarity [26,38].…”
Section: Indicator Extraction Techniques (mentioning)
Confidence: 99%
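To make the contrast concrete, here is a toy illustration of (i) surface-level similarity as word-set overlap and (ii) context similarity over crude bag-of-words document vectors; both are illustrative stand-ins, not the methods of any cited system:

```python
# Toy versions of two of the three approaches: surface-level similarity
# (Jaccard overlap of word sets) and context similarity (cosine between
# bag-of-words document vectors). Real systems use far richer features.
from collections import Counter
import math

def jaccard(a: str, b: str) -> float:
    """Surface-level similarity: overlap of the two word sets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def cosine_bow(a: str, b: str) -> float:
    """Context similarity over crude bag-of-words document vectors."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * \
           math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

s1 = "a man is playing a guitar"
s2 = "someone plays the guitar"
print(jaccard(s1, s2), cosine_bow(s1, s2))
```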
“…Most solutions include an ensemble of modules that employ features coming from different unit sizes and depths. More recent approaches generally include word embedding-based similarity (Liebeck et al., 2016; Brychcín and Svoboda, 2016) as a component of the final ensemble. The top-performing team in 2016 (Rychalska et al., 2016) uses an ensemble of multiple modules, including recursive autoencoders with WordNet and a monolingual aligner (Sultan et al., 2016).…”
Section: Related Work (mentioning)
Confidence: 99%
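A minimal sketch of such a word embedding-based similarity feature, assuming pretrained GloVe vectors obtained through gensim's downloader (not necessarily the embeddings the cited teams used):

```python
# Word embedding-based sentence similarity of the kind recent ensembles
# include as one feature: average the word vectors of each sentence and
# compare by cosine. GloVe via gensim's downloader is an assumption.
import numpy as np
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # downloads on first use

def sentence_vector(sentence: str) -> np.ndarray:
    """Mean of the in-vocabulary word vectors."""
    words = [w for w in sentence.lower().split() if w in vectors]
    if not words:
        return np.zeros(vectors.vector_size)
    return np.mean([vectors[w] for w in words], axis=0)

def embedding_similarity(a: str, b: str) -> float:
    va, vb = sentence_vector(a), sentence_vector(b)
    denom = np.linalg.norm(va) * np.linalg.norm(vb)
    return float(np.dot(va, vb) / denom) if denom else 0.0

# In the cited systems, this score is one feature among several in an ensemble.
print(embedding_similarity("a man plays a guitar", "someone is playing guitar"))
```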