Proceedings of the 11th International Conference on Natural Language Generation 2018
DOI: 10.18653/v1/w18-6536

Neural Generation of Diverse Questions using Answer Focus, Contextual and Linguistic Features

Abstract: Question Generation is the task of automatically creating questions from textual input. In this work we present a new Attentional Encoder-Decoder Recurrent Neural Network model for automatic question generation. Our model incorporates linguistic features and an additional sentence embedding to capture meaning at both sentence and word levels. The linguistic features are designed to capture information related to named entity recognition, word case, and entity coreference resolution. In addition, our model uses …
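The abstract describes augmenting an attentional encoder-decoder with token-level linguistic features (named entities, word case, coreference) alongside word embeddings. As a rough illustration of that idea only, the PyTorch-style sketch below concatenates feature embeddings with word embeddings before a bidirectional LSTM encoder; the class name, feature inventory, and dimensions are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

class FeatureAugmentedEncoder(nn.Module):
    """Bi-LSTM encoder whose per-token input concatenates a word embedding
    with embeddings of linguistic features (NER tag, word case, coreference).
    Illustrative sketch; dimensions and names are assumptions."""

    def __init__(self, vocab_size, n_ner, n_case, n_coref,
                 word_dim=300, feat_dim=16, hidden_dim=256):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.ner_emb = nn.Embedding(n_ner, feat_dim)
        self.case_emb = nn.Embedding(n_case, feat_dim)
        self.coref_emb = nn.Embedding(n_coref, feat_dim)
        self.rnn = nn.LSTM(word_dim + 3 * feat_dim, hidden_dim,
                           batch_first=True, bidirectional=True)

    def forward(self, words, ner, case, coref):
        # Concatenate word- and feature-level embeddings for each token,
        # then encode the sequence with a bidirectional LSTM.
        x = torch.cat([self.word_emb(words), self.ner_emb(ner),
                       self.case_emb(case), self.coref_emb(coref)], dim=-1)
        outputs, state = self.rnn(x)
        return outputs, state
```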

Cited by 32 publications (21 citation statements); references 27 publications.
“…7 Creating ground truth paraphrases: We randomly sampled a subset of about 1600 (question, entity) pairs collected from Steps 1 and 2 for obtaining human-generated question variations. We set up our task through the crowdsourcing platform Amazon Mechanical Turk (AMT), following similar dataset collection efforts (Rajpurkar et al., 2016; Harrison and Walker, 2018). Each question, along with the entity descriptions, was examined by three crowdworkers.…”
Section: Firs Dataset Creation (mentioning; confidence: 99%)
“…Seq2Seq models using long short-term memory (LSTM) with global attention are widely used for QG [8,15,18,20,40,42,43]. In decoding, the hidden states of the decoder are used to generate attention weights for the encoded representation of the input.…”
Section: Related Work (mentioning; confidence: 99%)
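The statement above describes global attention during decoding, where the decoder's hidden state produces attention weights over the encoded input representation. Below is a minimal Luong-style sketch of that mechanism; the class name and the "general" scoring choice are assumptions of this sketch, not details taken from the cited systems.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalAttention(nn.Module):
    """Score every encoder state against the current decoder state and
    return the softmax-weighted context vector (Luong-style 'general' score).
    Illustrative sketch only."""

    def __init__(self, hidden_dim):
        super().__init__()
        self.linear = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, decoder_state, encoder_outputs):
        # decoder_state: (batch, hidden); encoder_outputs: (batch, src_len, hidden)
        scores = torch.bmm(self.linear(encoder_outputs),
                           decoder_state.unsqueeze(2)).squeeze(2)  # (batch, src_len)
        weights = F.softmax(scores, dim=-1)          # attention weights over source
        context = torch.bmm(weights.unsqueeze(1), encoder_outputs).squeeze(1)
        return context, weights
```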
“…Moreover, they proposed selecting question words from a restricted vocabulary. In other models, more input features are added, such as part of speech, named entities, word case, coreference, and dependency [15,20,42,43]. These features are concatenated with the token embedding vector and answer-position signals, and are subsequently fed into an LSTM encoder.…”
Section: Related Work (mentioning; confidence: 99%)
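The answer-position signal mentioned above is often realised as BIO tags over the input tokens, which are then embedded and concatenated with the token and feature embeddings. The small sketch below illustrates that preprocessing step under this assumption; the function and example are hypothetical, not code from the cited papers.

```python
def answer_position_tags(tokens, answer_start, answer_end):
    """Return one BIO tag per token: 'B' for the first answer token,
    'I' for the rest of the answer span, 'O' everywhere else."""
    tags = []
    for i, _ in enumerate(tokens):
        if i == answer_start:
            tags.append("B")
        elif answer_start < i <= answer_end:
            tags.append("I")
        else:
            tags.append("O")
    return tags

tokens = ["the", "eiffel", "tower", "is", "in", "paris", "."]
print(answer_position_tags(tokens, answer_start=5, answer_end=5))
# -> ['O', 'O', 'O', 'O', 'O', 'B', 'O']
```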
“…Starting from early rule-based approaches that relied on syntactic transformations or handcrafted semantic templates (Heilman and Smith, 2010; Lindberg et al., 2013; Mazidi and Nielsen, 2014), automatic question generation from text has gradually transitioned to neural sequence-to-sequence generation methods (Du et al., 2017; Duan et al., 2017; Harrison and Walker, 2018; Zhu et al., 2019). Most state-of-the-art generators also benefit from large-scale language model pre-training (Scialom et al., 2019).…”
Section: Related Work (mentioning; confidence: 99%)