Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016) 2016
DOI: 10.18653/v1/s16-1082
SemEval-2016 Task 2: Interpretable Semantic Textual Similarity

Abstract: The final goal of Interpretable Semantic Textual Similarity (iSTS) is to build systems that explain the differences and commonalities between two sentences. The task adds an explanatory level on top of STS, formalized as an alignment between the chunks in the two input sentences, indicating the relation and similarity score of each alignment. The task provides train and test data on three datasets: news headlines, image captions and student answers. It attracted nine teams, totaling 20 runs. All data…

Cited by 95 publications (105 citation statements)
References 13 publications
“…This positive meaning ranges from implicatures, i.e., what is suggested in an utterance even though neither expressed nor strictly implied (Blackburn, 2008), to entailments. Other terms used in the literature include implied meanings (Mitkov, 2005), implied alternatives (Rooth, 1985) and semantically similars (Agirre et al, 2013). We do not strictly fit into any of this terminology, we reveal positive interpretations as intuitively done by humans when reading text.…”
Section: Terminology, Scope and Focus
confidence: 78%
“…We randomly split all dataset files of SemEval 2012–2015 (Agirre et al., 2012, 2013, 2015) into ten. We used the preparation of the data from Baudis et al. (2016).…”
Section: Experiments and Results
confidence: 99%
“…The task has been used as a community evaluation exercise, the *SEM 2013 shared task on Semantic Textual Similarity (Agirre et al, 2013b). The exercise attracted 14 system runs from 6 teams.…”
Section: Discussion
confidence: 99%
“…The systems described in this article participated in the *SEM 2013 shared task (Agirre et al, 2013b). Our baseline system was used as the overall task baseline against which all runs were compared.…”
Section: Performance In Shared Task
confidence: 99%