2022
DOI: 10.1016/j.jksuci.2020.04.001
The survey: Text generation models in deep learning

Cited by 119 publications (77 citation statements)
References 29 publications
“…AI can also be used to rewrite one article to suit several different channels or audience tastes. A survey of recent deep learning methods for text generation by Iqbal and Qureshi (2020) concludes that text generated from images could be most amenable to GAN processing, while topic-to-text translation is likely to be dominated by variational autoencoders (VAEs).…”
Section: Journalism and Text Generation (mentioning, confidence: 99%)
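Since the excerpt's claim about VAEs for topic-to-text generation is architectural, a minimal text-VAE sketch may help make it concrete. Everything below (names, sizes, the LSTM encoder/decoder choice) is an illustrative assumption, not a reference implementation from the survey.

# Minimal sketch of a sequence VAE for text, assuming integer-encoded tokens.
import torch
import torch.nn as nn

class TextVAE(nn.Module):
    def __init__(self, vocab=5000, emb=128, hid=256, z_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.LSTM(emb, hid, batch_first=True)
        self.to_mu = nn.Linear(hid, z_dim)        # posterior mean
        self.to_logvar = nn.Linear(hid, z_dim)    # posterior log-variance
        self.z_to_h = nn.Linear(z_dim, hid)       # seed the decoder state from z
        self.decoder = nn.LSTM(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)

    def forward(self, tokens):
        x = self.embed(tokens)
        _, (h, _) = self.encoder(x)               # final encoder hidden state
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        # Reparameterization trick: sample z while keeping gradients flowing.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        h0 = torch.tanh(self.z_to_h(z)).unsqueeze(0)
        seq, _ = self.decoder(x, (h0, torch.zeros_like(h0)))
        return self.out(seq), mu, logvar          # logits plus KL-term inputs

The training loss would combine token-level reconstruction (cross-entropy on the logits) with the KL divergence computed from mu and logvar, as in any standard VAE.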
“…It receives the image data components from the miner and extracts features using the YOLO [22][23] algorithm. Following the general principles in [24], the image caption generator consists of two neural networks: a YOLO-based CNN for feature extraction and an LSTM [25] for generating the text sequence. This is similar to the model of [15], but an LSTM is used instead of an RNN [26] because its forget gate retains relevant information during training and discards irrelevant information. Figure 3 shows the merge architecture for the encoder-decoder model from [15].…”
Section: Image Classifier (mentioning, confidence: 99%)
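The excerpt describes a "merge" encoder-decoder captioner: image features and the partial caption are encoded separately and combined late. A minimal sketch of that idea follows, assuming pre-extracted image features (e.g. from a YOLO/CNN backbone) of size FEAT_DIM and an integer-encoded vocabulary; all names and hyperparameters are illustrative, not taken from the cited papers.

import torch
import torch.nn as nn

FEAT_DIM, VOCAB, EMB, HID, MAX_LEN = 2048, 5000, 256, 256, 20  # assumed sizes

class MergeCaptioner(nn.Module):
    def __init__(self):
        super().__init__()
        self.img_proj = nn.Linear(FEAT_DIM, HID)         # project CNN features
        self.embed = nn.Embedding(VOCAB, EMB)            # word embeddings
        self.lstm = nn.LSTM(EMB, HID, batch_first=True)  # caption language model
        self.out = nn.Linear(HID, VOCAB)                 # next-word scores

    def forward(self, img_feats, caption_prefix):
        # Encode the partial caption with the LSTM; its forget gate keeps
        # relevant history and drops the rest, as the excerpt notes.
        seq, _ = self.lstm(self.embed(caption_prefix))
        # "Merge" step: combine image and text representations late,
        # here by addition, then predict the next word at each position.
        merged = torch.tanh(seq + self.img_proj(img_feats).unsqueeze(1))
        return self.out(merged)

# Usage: logits over the vocabulary for each prefix position.
model = MergeCaptioner()
feats = torch.randn(4, FEAT_DIM)                 # 4 images' feature vectors
prefix = torch.randint(0, VOCAB, (4, MAX_LEN))   # 4 partial captions
logits = model(feats, prefix)                    # shape (4, MAX_LEN, VOCAB)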
“…However, it is notable that, compared to the text generation task [11], sentence-level OLHCCR receives the continuous stroke input of the target word at each prediction time step, in addition to the outputs of previous steps. Inspired by this observation, we incorporate the prompting glyph information of the target word into each layer of the aforementioned autoregressive framework by adding an extra multi-modal fusion sub-layer; the other sub-layers are identical to the structure of the pre-trained autoregressive framework.…”
Section: Multi-layer Fusion Module (mentioning, confidence: 99%)
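The fusion idea in this excerpt can be sketched as a standard autoregressive (decoder-style) layer extended with one extra multi-modal sub-layer that attends to glyph/stroke features of the target word. The dimensions, the use of cross-attention for the fusion step, and all names below are assumptions for illustration only, not the cited paper's design.

import torch
import torch.nn as nn

class FusionDecoderLayer(nn.Module):
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Extra sub-layer: fuse stroke/glyph features into the text stream.
        self.glyph_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                 nn.Linear(4 * d_model, d_model))
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(d_model) for _ in range(3))

    def forward(self, x, glyph_feats, causal_mask):
        # Standard masked self-attention over previously predicted characters.
        a, _ = self.self_attn(x, x, x, attn_mask=causal_mask)
        x = self.norm1(x + a)
        # Multi-modal fusion sub-layer: each position attends to the continuous
        # stroke input of the target word (the "prompting glyph" information).
        g, _ = self.glyph_attn(x, glyph_feats, glyph_feats)
        x = self.norm2(x + g)
        return self.norm3(x + self.ffn(x))

# Usage with hypothetical shapes: 2 sentences, 10 characters, 30 stroke frames.
layer = FusionDecoderLayer()
chars = torch.randn(2, 10, 512)          # embedded character prefix
glyphs = torch.randn(2, 30, 512)         # encoded stroke/glyph features
mask = torch.triu(torch.ones(10, 10, dtype=torch.bool), diagonal=1)
out = layer(chars, glyphs, mask)         # shape (2, 10, 512)

Stacking several such layers, as the excerpt describes, injects the glyph prompt at every depth rather than only at the input.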