2015
DOI: 10.48550/arxiv.1509.00838
Preprint
What to talk about and how? Selective Generation using LSTMs with Coarse-to-Fine Alignment

Cited by 23 publications (31 citation statements)
References 13 publications
“…It takes into account the semantic relationship between records and reference words, and experimental results verify the effectiveness of this improvement on both document- and sentence-level datasets. Furthermore, our work is similar to but distinct from data-to-text generation studies (Mei, Bansal, and Walter 2015; Nie et al. 2018; Puduppully, Dong, and Lapata 2018; Bao et al. 2018; Sun et al. 2018; Gong et al. 2019a; Chen et al. 2019; Gong et al. 2019b). This series of work focuses on generating more accurate descriptions of given data, rather than on controlling the writing content of the output.…”
Section: Related Work (contrasting)
confidence: 72%
“…Similarly, the model also needs to select the suitable style words according to the input records. One straightforward way to address this problem is to use the relevant algorithms in data-to-text generation, such as the pre-selector (Mei, Bansal, and Walter 2015) and the content selector (Puduppully, Dong, and Lapata 2018). However, these supervised methods cannot be directly transferred, given that we impose an additional goal of preserving the style words, which lacks parallel data and an explicit training objective.…”
Section: Desired Output (mentioning)
confidence: 99%
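For context, the pre-selector referenced in the statement above is the coarse stage of this paper's coarse-to-fine aligner: it scores each input record for whether it should be mentioned at all, before the decoder's attention refines the choice at each step. Below is a minimal PyTorch sketch of that idea; the layer sizes, scorer architecture, and gating arithmetic are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class PreSelector(nn.Module):
    """Sketch of a pre-selector in the spirit of Mei, Bansal, and
    Walter (2015): each input record gets a standalone probability of
    being talked about, computed once, before any decoding step.
    Dimensions and layers here are assumptions for illustration."""

    def __init__(self, record_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(record_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, records: torch.Tensor) -> torch.Tensor:
        # records: (batch, num_records, record_dim)
        # Returns p_i in (0, 1) per record: a coarse "should I mention
        # this?" gate that later rescales the decoder's attention.
        return torch.sigmoid(self.scorer(records)).squeeze(-1)

# Usage: gate per-step attention weights by the selection probabilities.
records = torch.randn(2, 10, 32)           # 2 examples, 10 records each
p = PreSelector(record_dim=32)(records)    # (2, 10)
attn = torch.softmax(torch.randn(2, 10), dim=-1)
refined = (p * attn) / (p * attn).sum(-1, keepdim=True)  # renormalize
```

The coarse gate is what lets the model decide *what* to talk about globally, while the step-wise attention handles *how* each selected record is verbalized.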
“…End-to-end learning approaches: Several recent deep learning approaches propose to learn a mapping directly from inputs to actions, whether structured observations are provided [22,33] or the agent deals with raw visual observations [25,43]. Cross-modal grounding of language instructions to visual observations is used in several works, e.g., via reinforcement learning [38,37] or autoencoder architectures that impose a language-instruction-based heat map on the visual observations (using U-net architectures [24], attention mechanisms [49], or non-linear differentiable filters [2]).…”
Section: Related Work (mentioning)
confidence: 99%
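To make the cross-modal grounding idea in that statement concrete, here is one common variant sketched in PyTorch: project the instruction embedding into the visual feature space and score each spatial location, producing a language-conditioned heat map over the observation. All names and dimensions are hypothetical; this is not the implementation of any cited work.

```python
import torch
import torch.nn as nn

class LanguageHeatmap(nn.Module):
    """Illustrative sketch (assumed, not from a cited paper) of
    grounding a language instruction onto visual features: project the
    instruction embedding into the visual channel space, take per-
    location dot products, and normalize into a spatial heat map."""

    def __init__(self, lang_dim: int, vis_channels: int):
        super().__init__()
        self.project = nn.Linear(lang_dim, vis_channels)

    def forward(self, lang: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
        # lang: (batch, lang_dim); vis: (batch, C, H, W)
        query = self.project(lang)                        # (batch, C)
        scores = torch.einsum("bc,bchw->bhw", query, vis)  # (batch, H, W)
        b, h, w = scores.shape
        # Softmax over all H*W locations -> attention heat map.
        return torch.softmax(scores.view(b, -1), dim=-1).view(b, h, w)

heat = LanguageHeatmap(lang_dim=64, vis_channels=16)(
    torch.randn(2, 64), torch.randn(2, 16, 8, 8)
)  # (2, 8, 8); each map sums to 1
```

The heat map can then be concatenated with, or multiplied into, the visual features before the policy head, which is the role the U-net and attention variants above play in the cited works.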
“…Several recent deep learning approaches propose to learn a mapping directly from inputs to actions, whether structured observations are provided [22,33] or the agent deals with raw visual observations [25,43]. Proponents of this approach point to the simplicity of these models, which can outperform combinations of models that require engineering hand-crafted representations.…”
Section: End To End Learning Approaches (mentioning)
confidence: 99%