Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), 2015
DOI: 10.18653/v1/s15-2045

SemEval-2015 Task 2: Semantic Textual Similarity, English, Spanish and Pilot on Interpretability

Abstract: In semantic textual similarity (STS), systems rate the degree of semantic equivalence between two text snippets. This year, the participants were challenged with new datasets in English and Spanish. The annotations for both subtasks leveraged crowdsourcing. The English subtask attracted 29 teams with 74 system runs, and the Spanish subtask engaged 7 teams with 16 system runs. In addition, this year we ran a pilot task on interpretable STS, where the systems needed to add an explanatory layer, that…

Cited by 382 publications (315 citation statements); references 7 publications. Citing publications span 2015 to 2023.

Citation statements, ordered by relevance:
“…All systems were trained using the training/evaluation data from previous years' tasks (Agirre et al., 2012; Agirre et al., 2013; Agirre et al., 2014; Agirre et al., 2015). After filtering for duplicate examples, our training set contains a total of 13,061 examples.…”
Section: Results (mentioning, confidence 99%)
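The duplicate filtering this citation describes could look like the minimal sketch below. The (sentence1, sentence2, gold_score) representation, the lowercasing, and the order-insensitive key are illustrative assumptions, not the cited system's actual code.

```python
# Minimal sketch of filtering duplicate STS training examples,
# assuming each example is a (sentence1, sentence2, gold_score)
# triple. Normalization and key choice are illustrative only.
def deduplicate(examples):
    seen = set()
    unique = []
    for s1, s2, score in examples:
        # Treat (a, b) and (b, a) as the same pair.
        key = tuple(sorted((s1.strip().lower(), s2.strip().lower())))
        if key not in seen:
            seen.add(key)
            unique.append((s1, s2, score))
    return unique
```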
“…The STS task series (Agirre et al., 2012; Agirre et al., 2013; Agirre et al., 2014; Agirre et al., 2015) has aggregated a sizable dataset of sentence pairs annotated with numeric similarity scores. The presence of this dataset allows for a shift from earlier work that mostly used unsupervised learning (Corley and Mihalcea, 2005; Mihalcea et al., 2006; Li et al., 2006) to supervised approaches that leverage the labeled data (Sultan et al., 2015; Han et al., 2015; Hänig et al., 2015).…”
Section: Introduction (mentioning, confidence 99%)
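As a rough illustration of the unsupervised-to-supervised shift this citation describes, the sketch below trains a simple regressor on labeled pairs. The two features and the scikit-learn Ridge model are assumptions for the example, not any cited system's design.

```python
# Sketch of a supervised STS regressor trained on labeled sentence
# pairs. The features (word overlap, length difference) are
# illustrative placeholders, not a published feature set.
from sklearn.linear_model import Ridge

def word_overlap(s1: str, s2: str) -> float:
    a, b = set(s1.lower().split()), set(s2.lower().split())
    return len(a & b) / max(len(a | b), 1)  # Jaccard overlap

def featurize(pairs):
    return [[word_overlap(s1, s2), abs(len(s1.split()) - len(s2.split()))]
            for s1, s2 in pairs]

train_pairs = [("A man plays a guitar.", "A man is playing a guitar."),
               ("A dog runs outside.", "Stock prices fell sharply.")]
train_scores = [4.8, 0.0]  # gold similarity on the 0-5 scale

model = Ridge(alpha=1.0).fit(featurize(train_pairs), train_scores)
print(model.predict(featurize([("A boy plays.", "A child is playing.")])))
```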
“…Measuring Semantic Textual Similarity (STS) means quantifying the semantic equivalence between a given pair of texts (Banjade et al., 2015; Agirre et al., 2015). For example, a similarity score of 0 means that the texts are not similar at all, while a score of 5 means that they have the same meaning.…”
Section: Introduction (mentioning, confidence 99%)
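The 0 (unrelated) to 5 (same meaning) scale can be made concrete with a toy scorer. The bag-of-words cosine below and the linear rescale onto [0, 5] are illustrative assumptions; actual STS systems learn this mapping rather than applying a fixed rescale.

```python
import math
from collections import Counter

def cosine(s1: str, s2: str) -> float:
    """Bag-of-words cosine similarity between two sentences."""
    a, b = Counter(s1.lower().split()), Counter(s2.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def to_sts_scale(sim: float) -> float:
    # Map a similarity in [0, 1] onto the task's 0-5 scale.
    return 5.0 * max(0.0, min(1.0, sim))

print(to_sts_scale(cosine("a man plays guitar", "a man is playing a guitar")))
```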
“…The Semantic Textual Similarity (STS) task, a fundamental task in the natural language processing (NLP) field, has been held at SemEval since 2012 (Agirre et al., 2012; Agirre et al., 2013; Agirre et al., 2014; Agirre et al., 2015; Agirre et al., 2016). It aims to compute the semantic similarity of two short texts or sentences, with results evaluated against a gold-standard set produced by several official annotators (Cer et al., 2017).…”
Section: Introduction (mentioning, confidence 99%)
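Since systems are scored against the gold standard, the standard STS metric is the Pearson correlation between system predictions and gold annotations. A minimal sketch using scipy's real pearsonr API; the gold and system values are made up for illustration.

```python
# Evaluate STS output the standard way: Pearson correlation between
# gold annotations and system predictions (values are illustrative).
from scipy.stats import pearsonr

gold = [4.8, 3.2, 0.5, 2.0]    # annotator scores on the 0-5 scale
system = [4.5, 2.9, 1.0, 2.4]  # hypothetical system predictions

r, _ = pearsonr(gold, system)
print(f"Pearson r = {r:.3f}")
```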