2016
DOI: 10.15439/2016f419

Word2vec Based System for Recognizing Partial Textual Entailment

Abstract: Recognizing textual entailment is typically treated as a binary decision task: whether a text T entails a hypothesis H. In the case of a negative answer, it is therefore not possible to express that H is "almost entailed" by T. Partial textual entailment provides one possible approach to this issue. This paper presents an attempt to use the word2vec model for recognizing partial (faceted) textual entailment. The proposed approach does not rely on language-dependent NLP tools and other linguistic resources, the…
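The abstract describes a word2vec-based approach to faceted entailment. A minimal sketch of the general idea (not the authors' exact system) is to embed the words of a facet of H and of T with a pretrained word2vec model and decide entailment by a cosine-similarity threshold; the model name, the vector-averaging step, and the threshold below are assumptions chosen for illustration.

```python
# Minimal sketch of word2vec-based facet scoring (illustrative only; the
# averaging strategy, model, and threshold are assumptions, not the paper's
# exact method).
import numpy as np
import gensim.downloader as api

model = api.load("word2vec-google-news-300")  # pretrained word2vec vectors

def avg_vector(tokens):
    """Average the vectors of all in-vocabulary tokens."""
    vecs = [model[t] for t in tokens if t in model]
    return np.mean(vecs, axis=0) if vecs else None

def facet_entailed(text_tokens, facet_tokens, threshold=0.6):
    """Decide one facet by cosine similarity of the averaged vectors."""
    t, f = avg_vector(text_tokens), avg_vector(facet_tokens)
    if t is None or f is None:
        return False
    cos = float(np.dot(t, f) / (np.linalg.norm(t) * np.linalg.norm(f)))
    return cos >= threshold

print(facet_entailed("a man is slicing bread".split(), "man slicing".split()))
```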

Cited by 5 publications (3 citation statements). References 5 publications.
“…Note that we assume that if the entailment T → H does not hold, then there is at least one facet such that faceted entailment according to T, H does not hold. Although the proposed method includes a certain amount of manual work, in the case of preparing a balanced corpus, half of the work (the positive instances) is done automatically; moreover, the negative instances can be recommended from the list of potential candidates (obtained in the third step) by some simple algorithm like [19].…”
Section: A Description of the Methods
Citation type: mentioning; confidence: 99%
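The quoted assumption, that full entailment fails exactly when at least one facet fails, can be restated as a small predicate. The sketch below is purely illustrative; faceted_entailment is a hypothetical stand-in for whatever per-facet classifier is used.

```python
# Illustrative restatement of the quoted assumption: T entails H only if
# every facet of H is entailed. `faceted_entailment` is a hypothetical
# per-facet classifier, not part of the cited paper's code.
def entails(T, H, facets, faceted_entailment):
    """Full entailment holds iff faceted entailment holds for every facet of H."""
    return all(faceted_entailment(T, H, facet) for facet in facets)
```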
“…Word embedding is a continuous vector representation of words that encodes a word's meaning, such that words that are closer in the vector space are expected to be similar in meaning. Using word embeddings as additional features improves performance on many NLP tasks, including text classification [22][23][24][25][26][27][28][29][30]. Different machine learning algorithms can be used to derive these vectors, such as Word2Vec [31], FastText [32], and GloVe [33].…”
Section: Literature Review
Citation type: mentioning; confidence: 99%
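As an illustration of the quoted property that nearby vectors correspond to similar meanings, the sketch below loads a small set of pretrained embeddings through gensim's downloader and compares word similarities; the specific model name is an assumption chosen for its small size.

```python
# Illustration of the "closer in vector space means similar in meaning"
# property. The model choice ("glove-wiki-gigaword-50") is an assumption;
# any pretrained KeyedVectors model would do.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # small pretrained GloVe vectors

print(vectors.similarity("king", "queen"))   # high: related words
print(vectors.similarity("king", "carrot"))  # low: unrelated words
print(vectors.most_similar("paris", topn=3))
```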
“…For example, Spanish [6], Arabic [7,8], German [9], Czech [10], Italian [14], Japanese [15], and Chinese [16]. Moreover, some researchers build systems that are independent of a standard dataset, although the experimental data still refer to the standard dataset [17,18]. All these works indicate that research in the TE field is still growing [2].…”
Section: Related Work
Citation type: mentioning; confidence: 99%