2018
DOI: 10.1007/978-3-319-73618-1_7

Large-Scale Simple Question Generation by Template-Based Seq2seq Learning

Cited by 17 publications (8 citation statements)
References 12 publications
“…Additionally, due to the rareness of the word 'kiev', our model is able to cover the related information. Similarly, the generated description for WB-filter covers the information from 'Organization' and 'Birthplace'. Fig 6 (right) shows the effects of λ in Eq 6.…”
Section: Analysis of Experimental Results
confidence: 99%
“…Androutsopoulos et al (2013) and Duma and Klein (2013) focused on generating descriptive language for Ontologies and RDF triples. Most recent work utilize neural networks on data-to-text generation (Mahapatra et al, 2016;Wiseman et al, 2017;Kaffee et al, 2018;Freitag and Roy, 2018;Qader et al, 2018;Dou et al, 2018;Yeh et al, 2018;Jhamtani et al, 2018;Liu et al, 2017bLiu et al, , 2019Peng et al, 2019;Dušek et al, 2019). Some closely relevant work also focused on the table-to-text generation.…”
Section: Related Workmentioning
confidence: 99%
“…To reduce hallucinations in the reference-based setting, researchers have applied iterative training (Nie et al., 2019), post-editing (Dong et al., 2020), soft constraints, e.g. attention manipulation (Kiddon et al., 2016; Hua and Wang, 2019; Tian et al., 2019) or optimal transport (Wang et al., 2020b), and template/scaffold-guided schemas (Liu et al., 2017; Wiseman et al., 2018; Moryossef et al., 2019; Ye et al., 2020; Shen et al., 2020; Li and Rush, 2020; Balakrishnan et al., 2019; Liu et al., 2021).…”
Section: Related Work
confidence: 99%
“…Liang, Jordan, and Klein (2009) and Angeli, Liang, and Klein (2010) extend the work of Barzilay and Lapata to the soccer and weather domains by learning the alignment between data and text using hidden-variable models. Most recent work treats natural language generation in an end-to-end fashion (Mei, Bansal, and Walter 2016; Lebret, Grangier, and Auli 2016; Wiseman, Shieber, and Rush 2017; Xu et al. 2018; Lin et al. 2018; Luo et al. 2018; Liu et al. 2017b; Wang et al. 2017) with the help of attention mechanisms (Bahdanau, Cho, and Bengio 2014; Luong, Pham, and Manning 2015; Luo et al. 2018; Wu et al. 2018).…”
Section: Related Work
confidence: 99%