Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1600
Towards Comprehensive Description Generation from Factual Attribute-value Tables

Abstract: Comprehensive descriptions for factual attribute-value tables, which should be accurate, informative and loyal, can be very helpful for end users in understanding structured data of this form. However, previous neural generators may suffer from missing key attributes, less informative output, and groundless information, which impede the generation of high-quality comprehensive descriptions for tables. To relieve these problems, we first propose a force attention (FA) method to encourage the generator to p…
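The truncated abstract describes neural generation from attribute-value tables. As a minimal illustrative sketch (not the paper's actual pipeline; the `<attr>`/`<value>` separator tokens are hypothetical), such a table can be linearized into a flat token sequence before being fed to a neural encoder:

```python
# Illustrative only: linearize a factual attribute-value table into a
# single string, a common preprocessing step for table-to-text models.
# The separator tokens below are assumptions, not from the paper.

def linearize_table(table):
    """Turn {attribute: value} pairs into one space-joined token string."""
    tokens = []
    for attr, value in table.items():
        tokens.append(f"<attr> {attr}")
        tokens.append(f"<value> {value}")
    return " ".join(tokens)

example = {"name": "Ada Lovelace", "occupation": "mathematician"}
print(linearize_table(example))
# -> <attr> name <value> Ada Lovelace <attr> occupation <value> mathematician
```

The linearized string preserves attribute-value alignment through the separator tokens, which an encoder's attention can then exploit.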

Cited by 21 publications (26 citation statements)
References 41 publications
“…Human Evaluation Human ratings on the generated descriptions provide a more reliable reflection of model performance. Following Liu et al. (2019b), we conduct a comprehensive human evaluation of our model and the baselines. The annotators are asked to evaluate from three perspectives: fluency, coverage (how much table content is recovered) and correctness (how much generated content is faithful to the source table).…”
Section: Evaluation Metrics
confidence: 99%
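The coverage criterion above is judged by human annotators. As a rough automatic proxy only (my own illustration, not the evaluation protocol of the cited work), coverage can be approximated by the fraction of table values that appear verbatim in the generated description:

```python
def coverage(table_values, description):
    """Fraction of table values appearing verbatim in the description.
    A crude string-matching heuristic; the paper's coverage score comes
    from human annotators, not from this function."""
    if not table_values:
        return 0.0
    text = description.lower()
    hits = sum(1 for v in table_values if v.lower() in text)
    return hits / len(table_values)

print(coverage(["Ada Lovelace", "mathematician"],
               "Ada Lovelace was a mathematician."))
# -> 1.0
```

Verbatim matching misses paraphrases ("born in 1815" vs. "1815-born"), which is one reason human judgments are preferred for this criterion.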
“…Other related works are (Perez-Beltrachini and Lapata, 2018; Liu et al., 2019b). In (Perez-Beltrachini and Lapata, 2018), a content selection mechanism trained with multi-task learning and reinforcement learning is proposed.…”
Section: Related Work
confidence: 99%
“…In (Perez-Beltrachini and Lapata, 2018), a content selection mechanism trained with multi-task learning and reinforcement learning is proposed. In (Liu et al., 2019b), a force-attention and reinforcement-learning based method is proposed. Their learning methods are completely different from our method, which simultaneously incorporates an optimal-transport matching loss and an embedding similarity loss.…”
Section: Related Work
confidence: 99%
“…Among the baselines shown in Table A4, SQLova is the one that is strictly comparable to BRIDGE, as both use BERT-large-uncased. Hydra-Net uses RoBERTa-Large (Liu et al., 2019a) and X-SQL uses MT-DNN (Liu et al., 2019b).…”
Section: A5 WikiSQL Experiments
confidence: 99%