Proceedings of the 28th International Conference on Computational Linguistics 2020
DOI: 10.18653/v1/2020.coling-main.509

Towards automatically generating Questions under Discussion to link information and discourse structure

Abstract: Questions under Discussion (QUD; Roberts, 2012) are emerging as a conceptually fruitful approach to spelling out the connection between the information structure of a sentence and the nature of the discourse in which the sentence can function. To make this approach useful for analyzing authentic data, Riester, Brunetti & De Kuthy (2018) presented a discourse annotation framework based on explicit pragmatic principles for determining a QUD for every assertion in a text. They demonstrate that this supports more …

Cited by 4 publications (23 citation statements). References 37 publications.
“…The generated question–answer pairs in the QUACC corpus all satisfy the requirement of question–answer congruence, and as shown in De Kuthy et al. (2020), this data set is a good source for training and testing question generation approaches. De Kuthy et al. (2020) and Kannan et al. (2021) trained word-, character- and subword-level seq2seq models that successfully generate questions satisfying question–answer congruence.…”
Section: Corpus Creation
confidence: 95%
“…In the following, we will use this cleaned corpus to evaluate a number of models that have been shown to provide good results for the task of QG with QA congruence. Our goal is to show that our cleaned QUACC data set enables different types of neural models to produce questions of even higher quality compared to the numbers presented in De Kuthy et al. (2020) and Kannan et al. (2021), where the various models were trained and tested only on the uncleaned QUACC.…”
Section: Training a Neural Classifier
confidence: 98%