Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2022
DOI: 10.18653/v1/2022.acl-long.277
Chart-to-Text: A Large-Scale Benchmark for Chart Summarization

Cited by 36 publications (42 citation statements); references 0 publications.
“…Finally, participants completed the demographic section. Here, they reported their age range (e.g., "18–24"), their current education level (e.g., "Less than high school"), and their experience with charts and reading. This experience was captured through both the frequency (e.g., "Every day") as well as the context in which they engaged with the material (e.g., government reports).…”
Section: Methods (mentioning, confidence: 99%)
“…Besides serving as input or output for visual analysis, systems have begun to incorporate natural language-generated text along with their visualization responses to help describe key insights to the user. For example, a variety of research systems [6,21,40,43,46] and tools such as Tableau's Summary Card [51], and Power BI [38] produce natural language captions that summarize statistics and trends depicted by the chart.…”
Section: Visualization + Text for Analysis and Storytelling (mentioning, confidence: 99%)
“…To avoid an overly simple and repetitive structure, we analyzed three different corpora of chart summary sentences and curated several templates for each message type with varying syntactic structures. To find varied templates, we analyzed the dataset shared by Kantharaj et al. in their extended work on Chart-to-Text [45]. Each time a particular category of message appears, we randomly pick a different template to enhance naturalness and lexical diversity.…”
Section: Template-based (Moderate Length) (mentioning, confidence: 99%)
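The template-selection strategy quoted above can be sketched in a few lines. This is a hypothetical illustration, not the cited authors' code: the message types, templates, and the `realize` helper are all invented for the example; only the idea (several templates per message type, one drawn at random per occurrence) comes from the excerpt.

```python
import random

# Hypothetical templates per message type; the cited work curates several
# per type with varying syntactic structure (the strings here are made up).
TEMPLATES = {
    "trend": [
        "{series} {direction} from {start} to {end}.",
        "Between {start} and {end}, {series} {direction}.",
    ],
    "extremum": [
        "{series} peaks at {value} in {year}.",
        "The highest value of {series}, {value}, occurs in {year}.",
    ],
}

def realize(message_type: str, slots: dict, rng=random) -> str:
    """Pick a random template for the message type and fill its slots."""
    template = rng.choice(TEMPLATES[message_type])
    return template.format(**slots)
```

Drawing a fresh template on every occurrence of a message type is what keeps repeated messages from reading identically across a summary.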
“…Ultimately, two model variants are developed: ChartVLM-Base-7.3B (0.3B chart image encoder & base decoder + 7B auxiliary decoder) and ChartVLM-Large-14.3B (1.3B chart image encoder & base decoder + 13B auxiliary decoder). All the data used during the fine-tuning stage comes from ChartQA [23], PlotQA [25], Chart2Text [14], and SimChart9K [34]. In addition, ChartVLM is trained on 32 NVIDIA Tesla A100 GPUs.…”
Section: Cascaded Decoders Design (mentioning, confidence: 99%)
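The model-name suffixes in the excerpt encode the total parameter count as the sum of the two components. A quick sanity check of that arithmetic (the component figures are taken directly from the quoted text):

```python
# Parameter counts in billions, as quoted: (encoder & base decoder, auxiliary decoder).
variants = {
    "ChartVLM-Base-7.3B": (0.3, 7.0),
    "ChartVLM-Large-14.3B": (1.3, 13.0),
}

for name, (encoder_and_base, auxiliary) in variants.items():
    total = encoder_and_base + auxiliary
    # The name suffix should state the combined size, e.g. "7.3B".
    assert name.endswith(f"{total:.1f}B")
```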