Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2021
DOI: 10.18653/v1/2021.acl-long.115

Towards Table-to-Text Generation with Numerical Reasoning

Abstract: Recent neural text generation models have shown significant improvement in generating descriptive text from structured data such as table formats. One of the remaining important challenges is generating more analytical descriptions that can be inferred from facts in a data source. The use of a template-based generator and a pointer-generator is among the potential alternatives for table-to-text generators. In this paper, we propose a framework consisting of a pre-trained model and a copy mechanism. The pre-tra…

Cited by 22 publications (15 citation statements)
References 17 publications
“…A natural language format is likely to be more accessible and capable of clearly describing the recommended actionable changes. In domains such as finance and health, controlled text generation, which builds on numerical reasoning in language models, is critical for accurately conveying cf-XAI: inaccurate suggestions for numerical values or attributes can have significant consequences, such as rejected loan applications and financial harm to the applicant [22]. Despite advances in large language models such as GPT-3, these inaccuracies persist due to hallucination in the generated text [10].…”
Section: Related Work
confidence: 99%
“…Another appeal of neural NLG is that texts are generated automatically from the data without needing handcrafted rules. Applications of deep neural NLG approaches include table-to-text generation [13], [14], [19], [27], table-based question answering [28]-[30], and graph-to-text generation [31], [32].…”
Section: Related Work
confidence: 99%
“…Large pre-trained language models are trained on a vast text corpus, giving them a broad, generalized understanding of language. Fine-tuning these models for specific tasks has been shown to improve their task-specific performance, even with limited training data [12]-[14]. Two such language models are T5 [15] and BART [16].…”
Section: Introduction
confidence: 99%
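The excerpt above refers to fine-tuning pre-trained sequence-to-sequence models such as T5 or BART on a downstream task. As a minimal sketch, assuming the HuggingFace Transformers and PyTorch libraries, the following shows a single fine-tuning step of T5 on one linearized-table/description pair; the checkpoint name, the linearization string, and the learning rate are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of one fine-tuning step of T5 on a table-to-text pair
# (illustrative only; not the authors' exact setup).
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# A single (linearized table, target description) pair as a stand-in for a dataset.
source = "team: Warriors | points: 113 | opponent: Lakers | opponent_points: 105"
target = "The Warriors beat the Lakers by 8 points, 113 to 105."

inputs = tokenizer(source, return_tensors="pt", truncation=True)
labels = tokenizer(target, return_tensors="pt", truncation=True).input_ids

model.train()
loss = model(**inputs, labels=labels).loss  # standard seq2seq cross-entropy
loss.backward()
optimizer.step()
```

In practice the same loop would iterate over batches drawn from the full set of table-text pairs.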
“…et al. [125] view a table T_D as a set of cells with their corresponding row and column headers h = [rh : ch], with th for overlapping tokens, numerical value val, and metric type m. The cells are marked with a target flag tgt, which is set to 1 for targeted cells and 0 otherwise, according to the content plan. The resulting tables are linearized with templates consisting of the concatenation T_D = [h : th : val : m : tgt], filtration based on tgt, pre-computed mathematical operations, and their respective combinations.…”
Section: Linearization
confidence: 99%
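To make the cell representation and linearization described in the excerpt above concrete, here is a small illustrative sketch; the Cell fields, separator characters, and example values are assumptions for illustration and do not reproduce the authors' exact format.

```python
# Illustrative sketch of linearizing table cells as [h : th : val : m : tgt],
# with filtration on the target flag (field names and format are assumed).
from dataclasses import dataclass

@dataclass
class Cell:
    row_header: str      # rh
    col_header: str      # ch
    overlap_tokens: str  # th: overlapping tokens, if any
    value: float         # val
    metric: str          # m, e.g. "F1"
    target: int          # tgt: 1 if the cell is in the content plan, else 0

def linearize(cells):
    """Concatenate each targeted cell as [h : th : val : m : tgt]."""
    parts = []
    for c in cells:
        if c.target == 0:  # filtration based on the target flag
            continue
        header = f"{c.row_header} : {c.col_header}"
        parts.append(f"[{header} : {c.overlap_tokens} : {c.value} : {c.metric} : {c.target}]")
    return " ".join(parts)

cells = [
    Cell("BERT", "F1", "", 91.2, "F1", 1),
    Cell("LSTM", "F1", "", 85.4, "F1", 0),  # dropped by filtration
]
print(linearize(cells))
# [BERT : F1 :  : 91.2 : F1 : 1]
```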
“…et al. [125], similarly, follow template-guided generation [219] (see §3.2.2), where the pre-computed results of numeric operations are copied over to the template to replace the placeholder tokens. For pre-trained language models (PLMs), the authors incorporate this copying into the fine-tuning stage.…”
Section: Hierarchical Encoders
confidence: 99%
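A minimal sketch of the copy step described above, under the assumption that placeholders are plain string tokens: numeric operations are pre-computed over the table values and their results are copied into the template slots. The placeholder names, operations, and template text are hypothetical.

```python
# Sketch of template-guided generation with pre-computed numeric operations
# copied into placeholder tokens (all names and values are illustrative).
values = {"this_year": 95.1, "last_year": 90.3}

# Pre-computed operations over the table values.
operations = {
    "<MAX>": max(values.values()),
    "<DIFF>": round(values["this_year"] - values["last_year"], 1),
}

template = "The best score is <MAX>, an improvement of <DIFF> points over last year."

description = template
for placeholder, result in operations.items():
    description = description.replace(placeholder, str(result))

print(description)
# The best score is 95.1, an improvement of 4.8 points over last year.
```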