Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), 2017.
DOI: 10.18653/v1/S17-2016

HCTI at SemEval-2017 Task 1: Use convolutional neural network to evaluate Semantic Textual Similarity

Abstract: This paper describes our convolutional neural network (CNN) system for the Semantic Textual Similarity (STS) task. We calculated the semantic similarity score between two sentences by comparing their semantic vectors. We generated a semantic vector by max pooling over every dimension of all word vectors in a sentence. There are two key design tricks used by our system. One is that we trained a CNN to transform GloVe word vectors into a more suitable form for the STS task before pooling. Another is that we trained a ful…

Cited by 51 publications (44 citation statements); references 10 publications. Citation types: 0 supporting, 44 mentioning, 0 contrasting. Citing publications span 2018–2024.

Citation statements (ordered by relevance):
“…We observed that no word embedding has strong results on all the tasks. Although trained on the paraphrase database and having the highest |V|_avail, the SL999 embedding could not outperform the GloVe embedding on SICK-R. HCTI (Shao, 2017), which is the current state of the art among neural representation models on STSB, also used the GloVe embedding. However, the performance of HCTI on STSB (78.4) is lower than that of our model using the GloVe embedding.…”
Section: Evaluation of Exploiting Multiple Pre-trained Word Embeddings
confidence: 99%
“…
Model                                      STSB   SICK-R  SICK-E  MRPC
Ensemble models / feature engineering
  DT_TEAM (Maharjan et al., 2017)          79.2   -       -       -
  ECNU (Tian et al., 2017)                 81     -       -       -
  BIT                                      80.9   -       -       -
  TF-KLD (Ji and Eisenstein, 2013)         -      -       -       80.41/85.96
Neural representation models with one embedding
  Multi-Perspective CNN (He et al., 2015)  -      86.86   -       78.6/84.73
  InferSent (Conneau et al., 2017)         75.8   88.4    86.1    76.2/83.1
  GRAN (Wieting and Gimpel, 2017)          76.4   86      -       -
  Paragram-Phrase (Wieting et al., 2016b)  73.2   86.84   85.3    -
  HCTI (Shao, 2017)                        78.4   -       -       -
Neural representation models with the five embeddings using sentence-sentence comparison (…)

We report the results of these methods in Table 2. Overall, our M-MaxLSTM-CNN shows competitive performances in these tasks.…”
Section: STSB
confidence: 99%
“…The techniques used are diverse and the results obtained are encouraging. Some apply neural network algorithms such as attention mechanisms [17] or convolutional networks [30]. Others compute features with more traditional semantic and syntactic analysis tools, such as alignment measures, to feed supervised learning models [24].…”
Section: Semantic Similarity
confidence: 99%
“…Similar to the Focus layer, DPAD is one of the options for the prediction layer, at which we decide the similarity score; it has been widely used in the literature [12,31]. To predict the similarity between a pair of questions, we first extract each question's thought vector using one of the sentence embedding methods.…”
Section: Dot Product and Absolute Distance (DPAD)
confidence: 99%