Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017) 2017
DOI: 10.18653/v1/s17-2031
LIPN-IIMAS at SemEval-2017 Task 1: Subword Embeddings, Attention Recurrent Neural Networks and Cross Word Alignment for Semantic Textual Similarity

Abstract: In this paper we report, on the one hand, our attempt to use state-of-the-art neural approaches proposed to measure Semantic Textual Similarity (STS). On the other hand, we propose a linguistically motivated, unsupervised cross-word alignment approach. The neural approaches proposed herein are divided into two main stages. The first stage deals with constructing neural word embeddings, the components of sentence embeddings. The second stage deals with constructing a semantic similarity funct…

Cited by 3 publications (2 citation statements) · References 6 publications
“…When it comes to reasoning, specificity of performance metrics may not be characteristic of natural language, especially from the point of view of the open vocabulary inherent in natural language and semantic change (probably due to logical inference and/or synonymy). To account for these points of view, we performed hypothesis testing based on Semantic Textual Similarity (STS) in source and target tasks, which measures reasoning quality in the sense of semantic relatedness [34,29]. We compared to the distribution of STS measurements between predicted and shuffled ground truth object phrases (a random baseline that simulates perturbation of the actual correspondence between subject-predicate and object).…”
Section: Methods
Confidence: 99%
“…The main advantage of such an approach is that there exists the possibility of studying the statistical behavior of sentence meaning. As an additional and important benefit, sentence embeddings make it possible to leverage a number of NLP tasks, such as sentence clustering, text summarization (Zhang et al., 2012; Arroyo-Fernández, 2015; Arroyo-Fernández et al., 2016; Yu et al., 2017), sentence classification (Kalchbrenner et al., 2014; Chen et al., 2017; Er et al., 2016), paraphrase identification (Yin and Schütze, 2015), semantic similarity/relatedness and sentiment classification (Arroyo-Fernández and Meza Ruiz, 2017; Chen et al., 2017; De Boom et al., 2016; Kalchbrenner et al., 2014; Onan et al., 2017; Yazdani and Popescu-Belis, 2013).…”
Section: Introduction
Confidence: 99%