Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence 2020
DOI: 10.24963/ijcai.2020/419
RDF-to-Text Generation with Graph-augmented Structural Neural Encoders

Abstract: The task of RDF-to-text generation is to generate a corresponding descriptive text given a set of RDF triples. Most of the previous approaches either cast this task as a sequence-to-sequence problem or employ graph-based encoder for modeling RDF triples and decode a text sequence. However, none of these methods can explicitly model both local and global structure information between and within the triples. To address these issues, we propose to jointly learn local and global structure information via c…

Cited by 13 publications (9 citation statements); References 55 publications
“…The resulting performances confirmed the benefit of the graph-based encoder. Following this conclusion, several papers proposed graph-based encoders as a solution (Gao et al., 2020; Zhao et al., 2020; Moussallem et al., 2020).…”
Section: Related Work
Confidence: 91%
“…IE and NLU, research directions. For instance, Iso et al. (2020) and Gao et al. (2020) have obtained state-of-the-art performances on the WebNLG RDF-to-text task. On the other hand, Guo et al. (2020) considered both NLG and IE objectives within the same learning framework via cycle training.…”
Section: Introduction
Confidence: 99%
“…Data-to-text Generation: Data-to-text generation transforms structured data into descriptive texts (Siddharthan, 2001; Gatt and Krahmer, 2018). Recent works have brought promising performance to several data-to-text generation tasks, e.g., Puduppully et al. (2019a,b), Gong et al. (2019a), and Wiseman et al. (2017) focus on report generation; Chisholm et al. (2017) and Lebret et al. (2016) target biography generation; Gao et al. (2020) generate texts from a set of RDF triples considering structural information. Previous works have also designed content selection and text planning models to determine what to say and how to say it (Puduppully et al., 2019a; Perez-Beltrachini and Lapata, 2018).…”
Section: Related Work
Confidence: 99%
“…The prevailing sequence-to-sequence (seq2seq) architectures typically address this issue via reranking (Wen et al., 2015a; Juraska et al., 2018) or some sophisticated training techniques (Nie et al., 2019; Kedzie and McKeown, 2019; Qader et al., 2019). For applications where structured inputs are present, neural graph encoders (Marcheggiani and Perez-Beltrachini, 2018; Rao et al., 2019; Gao et al., 2020) or decoding of explicit graph references (Logan et al., 2019) are applied for higher accuracy. Recently, large-scale pretraining has achieved SoTA results on WebNLG by fine-tuning T5 (Kale and Rastogi, 2020b).…”
Section: Related Work
Confidence: 99%