Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), 2015
DOI: 10.3115/v1/p15-2070

PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification

Abstract: We present a new release of the Paraphrase Database. PPDB 2.0 includes a discriminatively re-ranked set of paraphrases that achieve a higher correlation with human judgments than PPDB 1.0's heuristic rankings. Each paraphrase pair in the database now also includes fine-grained entailment relations, word embedding similarities, and style annotations.
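
For readers who want to work with the released data directly, the sketch below parses a PPDB 2.0-style entry. The field layout shown (LHS ||| phrase ||| paraphrase ||| features ||| alignment ||| entailment relation) is an assumption about the distributed text format and should be checked against the README of the actual release.

```python
# Minimal sketch of reading PPDB 2.0-style entries.
# Assumed layout: LHS ||| phrase ||| paraphrase ||| features ||| alignment ||| entailment
from dataclasses import dataclass

@dataclass
class PPDBEntry:
    lhs: str          # syntactic category, e.g. "[NP]"
    phrase: str       # source phrase
    paraphrase: str   # candidate paraphrase
    features: dict    # feature=value pairs, e.g. {"PPDB2.0Score": 5.3}
    alignment: str    # word alignment string
    entailment: str   # fine-grained relation, e.g. "Equivalence"

def parse_line(line: str) -> PPDBEntry:
    lhs, phrase, para, feats, align, rel = [f.strip() for f in line.split("|||")]
    features = {}
    for kv in feats.split():
        key, _, val = kv.partition("=")
        try:
            features[key] = float(val)
        except ValueError:
            features[key] = val
    return PPDBEntry(lhs, phrase, para, features, align, rel)

# Hypothetical entry for illustration only:
# parse_line("[NP] ||| the pact ||| the accord ||| PPDB2.0Score=5.3 ||| 0-0 1-1 ||| Equivalence")
```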

Cited by 255 publications (270 citation statements). References 15 publications.
“…This has resulted in word embeddings becoming very popular in natural language processing tasks, e.g. [11,12]. Moreover, there are a number of available word embedding frameworks, such as word2vec [13] and GloVe [14], with models that are pre-trained on large corpora from different domains, such as Google News or Wikipedia.…”
Section: Related Work
confidence: 99%
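
As a concrete illustration of the pre-trained models this statement refers to, the following sketch loads word2vec-format vectors with gensim. The library choice and the file name are assumptions for illustration, not something the cited works prescribe.

```python
# Hedged sketch: loading pre-trained word vectors with gensim.
# The file name is a placeholder for whichever pre-trained model
# (e.g. the Google News word2vec vectors) you have downloaded.
from gensim.models import KeyedVectors

w2v = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

print(w2v.similarity("car", "automobile"))   # cosine similarity of two in-vocabulary words
print(w2v.most_similar("database", topn=3))  # three nearest neighbours by cosine similarity
```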
“…Thus, we propose a method that inexpensively generates translations using machine translation and quality estimation. Ganitkevitch et al (2013) and Pavlick et al (2015) also use bilingual parallel corpora to build a paraphrase database using bilingual pivoting (Bannard and Callison-Burch, 2005). Their methods differ from ours in that they aim to acquire phrase-level paraphrase rules and carry out word alignment instead of machine translation.…”
Section: Related Work
confidence: 99%
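
The bilingual pivoting step referenced here treats two English phrases as paraphrases when they share foreign translations, roughly p(e2 | e1) ≈ Σ_f p(e2 | f) · p(f | e1). A toy sketch with made-up probability tables (the phrases and numbers below are placeholders, not real translation probabilities):

```python
# Toy sketch of bilingual pivoting (Bannard and Callison-Burch, 2005):
# score(e2 | e1) = sum over foreign phrases f of p(e2 | f) * p(f | e1).
from collections import defaultdict

p_f_given_e = {"under control": {"unter kontrolle": 0.8}}            # p(f | e1)
p_e_given_f = {"unter kontrolle": {"in check": 0.6,
                                   "under control": 0.4}}            # p(e2 | f)

def pivot_paraphrases(e1: str) -> dict:
    scores = defaultdict(float)
    for f, p_f in p_f_given_e.get(e1, {}).items():
        for e2, p_e2 in p_e_given_f.get(f, {}).items():
            if e2 != e1:
                scores[e2] += p_e2 * p_f    # accumulate over all pivots f
    return dict(scores)

print(pivot_paraphrases("under control"))   # "in check" scores 0.6 * 0.8 = 0.48
```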
“…In addition to a logistic regression classifier, the authors exploit dependency parse graphs, a paraphrase database (Pavlick et al, 2015) and several other features, to arrive at an accuracy of 73%. Another related approach is described by Augenstein et al. (2016), who apply stance detection methods on the SemEval 2016 Task 6 data set.…”
Section: Related Work
confidence: 99%
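
To make the classifier component of that description concrete, here is a minimal scikit-learn logistic regression stance classifier. It uses plain TF-IDF features as a stand-in for the richer dependency-parse and PPDB-based features described in the cited work, and the texts and labels are invented toy data.

```python
# Illustrative sketch only: a bag-of-words logistic regression stance classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["climate change is real", "this claim is false", "I fully agree"]
stances = ["agree", "disagree", "agree"]          # toy stance labels

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, stances)                           # train on the toy examples
print(clf.predict(["the claim is not true"]))     # predict a stance label
```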