Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2018)
DOI: 10.18653/v1/n18-2098

A Mixed Hierarchical Attention Based Encoder-Decoder Approach for Standard Table Summarization

Abstract: Structured data summarization involves generating natural language summaries from structured input data. In this work, we consider summarizing structured data occurring in the form of tables, as they are prevalent across a wide variety of domains. We formulate the standard table summarization problem, which deals with tables conforming to a single predefined schema. To this end, we propose a mixed hierarchical attention based encoder-decoder model which is able to leverage the structure in addition to the con…
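The abstract is truncated, but it names the core idea: a two-level ("mixed hierarchical") attention that weights words within each table field and then weights the fields themselves at every decoder step. A minimal sketch of such an attention module is given below, assuming a field/word table encoding in PyTorch; the class name, shapes, and scoring functions are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class HierarchicalAttention(nn.Module):
    # Word-level attention inside each field, then field-level attention
    # over fields; the per-field contexts are mixed by the field weights.
    def __init__(self, enc_dim, dec_dim):
        super().__init__()
        self.word_score = nn.Linear(enc_dim + dec_dim, 1)   # scores each word
        self.field_score = nn.Linear(enc_dim + dec_dim, 1)  # scores each field

    def forward(self, word_h, field_h, dec_h):
        # word_h:  (B, n_fields, n_words, enc_dim) word encodings per field
        # field_h: (B, n_fields, enc_dim)          one encoding per field
        # dec_h:   (B, dec_dim)                    current decoder state
        B, nf, nw, _ = word_h.shape
        q_w = dec_h[:, None, None, :].expand(B, nf, nw, dec_h.size(-1))
        a_w = torch.softmax(
            self.word_score(torch.cat([word_h, q_w], dim=-1)).squeeze(-1), dim=-1)
        field_ctx = (a_w.unsqueeze(-1) * word_h).sum(dim=2)   # (B, nf, enc_dim)
        q_f = dec_h[:, None, :].expand(B, nf, dec_h.size(-1))
        a_f = torch.softmax(
            self.field_score(torch.cat([field_h, q_f], dim=-1)).squeeze(-1), dim=-1)
        return (a_f.unsqueeze(-1) * field_ctx).sum(dim=1)     # (B, enc_dim)

# Example: 2 tables, 4 fields of 6 words each, 64-d encoder, 32-d decoder.
attn = HierarchicalAttention(enc_dim=64, dec_dim=32)
ctx = attn(torch.randn(2, 4, 6, 64), torch.randn(2, 4, 64), torch.randn(2, 32))
print(ctx.shape)  # torch.Size([2, 64])

At each decoder step, the returned context vector would be fed into the decoder alongside the previous token embedding; the exact mixing of the two attention levels and any copy mechanism are as described in the paper, not in this sketch.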

Cited by 24 publications (13 citation statements). References 10 publications.

“…Supervised NLG requires large-scale parallel corpora for training, which is a major impediment in scaling to diverse use-cases. For example, in the context of commercial dialog systems alone, there are several scenarios where a system's answer (which may be coming from a database (Jain et al. 2018)) needs to be transformed either for its tone (politeness, excitedness, etc.) or its level of formality (casual, formal, etc.)…”
Section: Introduction
Confidence: 99%
“…Such systems include the ones by Lebret, Grangier, and Auli (2016), who use a conditional language model with a copy mechanism for generation, Liu et al. (2017), who propose a dual attention Seq2Seq model, Nema et al. (2018), who use gated orthogonalization along with dual attention, and Bao et al. (2018), who introduce a flexible copying mechanism that selectively replicates contents from the table in the output sequence. Other systems revolve around popular data sets such as the WEATHERGOV data set (Liang, Jordan, and Klein 2009; Jain et al. 2018), the ROBOCUP data set (Chen and Mooney 2008), ROTOWIRE and SBNATION (Wiseman, Shieber, and Rush 2017), and the WEBNLG data set (Gardent et al. 2017). Recently, Bao et al. (2018) and Novikova, Dusek, and Rieser (2017) have introduced a new data set for table/tuple-to-text generation, and both supervised and unsupervised systems (Fevry and Phang 2018) have been proposed and evaluated against these data sets.…”
Section: Related Work
Confidence: 99%
“…Trisedya et al. (2018) propose a GTR-LSTM model to encode not only the triple information, but also the structure information of the entity graph, into a hidden semantic space. Jain et al. (2018) exploit a mixed hierarchical attention based encoder-decoder model to leverage both the structure and the content information. Shimorina and Gardent (2018) propose using delexicalization and a copy mechanism to enhance the performance of the sequence-to-sequence framework.…”
Section: Related Work
Confidence: 99%