Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2019)
DOI: 10.18653/v1/n19-1239
Open Information Extraction from Question-Answer Pairs

Abstract: Open Information Extraction (OpenIE) extracts meaningful structured tuples from free-form text. Most previous work on OpenIE considers extracting data from one sentence at a time. We describe NeurON, a system for extracting tuples from question-answer pairs. Since real questions and answers often contain precisely the information that users care about, such information is particularly desirable to extend a knowledge base with.
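To make the kind of output described in the abstract concrete, the sketch below shows a hand-written (subject, relation, object) tuple drawn from a question-answer pair. It is a minimal illustration under assumed names: `QAPair`, `Extraction`, and the example text are not from the paper and do not reflect NeurON's actual implementation.

```python
# Minimal sketch (not NEURON's code): the kind of structured tuple an
# OpenIE system could extract from a question-answer pair.
from dataclasses import dataclass


@dataclass
class QAPair:
    question: str
    answer: str


@dataclass
class Extraction:
    subject: str
    relation: str
    obj: str


# Hypothetical example: the answer carries exactly the fact the user asked
# about, so the extracted tuple is a natural candidate for a knowledge base.
qa = QAPair(
    question="What time does check-in start at the hotel?",
    answer="Check-in starts at 3 pm.",
)
tuple_from_qa = Extraction(subject="check-in", relation="starts at", obj="3 pm")

print(qa)
print(tuple_from_qa)
```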

Cited by 18 publications (17 citation statements)
References 29 publications
“…(Fader et al., 2011; Mausam et al., 2012; Del Corro and Gemulla, 2013), most recent open IE research has focused on deep-neural-network-based supervised learning models. Such systems are typically based on bidirectional long short-term memory (BiLSTM) and are formulated for two categories: sequence labeling (Stanovsky et al., 2018; Sarhan and Spruit, 2019; Jia and Xiang, 2019) and sequence generation (Cui et al., 2018; Sun et al., 2018; Bhutani et al., 2019). The latter enables flexible extraction; however, it is more computationally expensive than the former.…”
Section: Introduction (mentioning)
confidence: 99%
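To clarify the two formulations named in the statement above, here is a small hand-written illustration of sequence labeling versus sequence generation for open IE. The tag set and output markup are assumptions chosen for exposition, not the schemes used by the cited systems.

```python
# Illustration only: hand-written example inputs/outputs, not model predictions.
sentence = ["Barack", "Obama", "was", "born", "in", "Hawaii", "."]

# Sequence labeling: each input token receives a tag marking argument and
# relation spans, so extractions are limited to spans of the input sentence.
bio_tags = ["B-ARG0", "I-ARG0", "B-REL", "I-REL", "I-REL", "B-ARG1", "O"]

# Sequence generation: a decoder emits the tuple as free text, which can
# rephrase or add words (more flexible, but more expensive to decode).
generated = "<arg0> Barack Obama </arg0> <rel> was born in </rel> <arg1> Hawaii </arg1>"

for token, tag in zip(sentence, bio_tags):
    print(f"{token}\t{tag}")
print(generated)
```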
“…To the best of our knowledge, there is no typed-dependency (grammatical relations) analysis available for Arabic. From an architectural perspective, Stanford CoreNLP does not attempt to do everything; it is nothing more than a straightforward pipeline architecture [21], [22].…”
Section: Dependency Annotation Scheme (mentioning)
confidence: 99%
“…Several prior works were reviewed to support the research process in this study [6], [7]. The reviewed works are not identical to this research, but they follow a similar principle of approach.…”
Section: Rule-Based or Statistical (unclassified)