Proceedings of the 24th ACM International Conference on Multimedia 2016
DOI: 10.1145/2964284.2984070

Generating Affective Captions using Concept And Syntax Transition Networks

Cited by 8 publications (2 citation statements) · References 13 publications

“…In [68][69][70], a VggNet architecture was adopted to extract low-level features; in [71], such a task was accomplished by a Res_152 architecture. Karayil et al [72], conversely, used AlexNet to model ANPs; a multi-directed graph ranked the ANP couples provided by the CNN and generated captions accordingly. For the purpose of improving sentiment description, Sun et al [73] extended a pre-trained caption generation model with an emotion classifier to add abstract knowledge.…”
Section: Sentiment Analysis: Other Applications (mentioning)
confidence: 99%
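
As a rough sketch of the ANP-scoring step attributed to Karayil et al [72] above (not their actual code), the snippet below repurposes a pretrained AlexNet so its final layer scores adjective-noun pairs (ANPs); the tiny ANP vocabulary, the swapped-in classifier head, and `rank_anps` are illustrative assumptions, and the multi-directed graph that ranks ANP couples into captions is not shown.

```python
# Hypothetical sketch: a pretrained AlexNet repurposed as an ANP scorer.
# Requires torchvision >= 0.13 for the weights API; the ANP head below is
# untrained here and would need fine-tuning on ANP-labelled images.
import torch
import torch.nn as nn
from torchvision import models

ANP_VOCAB = ["happy dog", "dark street", "colorful sky"]  # invented ANP classes

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
# Replace the 1000-way ImageNet head with an ANP classifier.
model.classifier[6] = nn.Linear(4096, len(ANP_VOCAB))
model.eval()

def rank_anps(image_tensor: torch.Tensor, top_k: int = 2) -> list:
    """Rank ANPs for a preprocessed 1x3x224x224 image tensor."""
    with torch.no_grad():
        scores = model(image_tensor).squeeze(0)
    top = torch.topk(scores, min(top_k, len(ANP_VOCAB))).indices
    return [ANP_VOCAB[i] for i in top]

print(rank_anps(torch.randn(1, 3, 224, 224)))
```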
“…For instance, Wang et al [37] propose a deeper bidirectional variant of Long Short Term Memory (LSTM) to take both history and future context into account in image captioning. A concept and syntax transition network [19] is presented to deal with large real-world captioning datasets such as YFCC100M [34]. Furthermore, in [31], reinforcement learning is also utilized to train the CNN-RNN based model directly on test metrics of the captioning task, showing significant gains in performance.…”
Section: Progress On Image Captioning (mentioning)
confidence: 99%
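
The concept and syntax transition network of [19] is only named in the excerpt above, so the toy below illustrates the general idea rather than that method: states are syntactic slots, edges emit image concepts (such as ANPs) with transition weights, and a caption is a weighted walk from START to END. The graph and its weights are invented for illustration; in practice such transitions would be estimated from a large captioned corpus such as YFCC100M [34], as the excerpt suggests.

```python
# Toy transition-network caption generator (illustrative, not the method
# of [19]): each state maps to weighted (next_state, emitted_word) choices.
import random

# state -> list of (next_state, emitted_word, weight); all values invented
TRANSITIONS = {
    "START": [("ADJ", "happy", 0.6), ("ADJ", "dark", 0.4)],
    "ADJ":   [("NOUN", "dog", 0.7), ("NOUN", "street", 0.3)],
    "NOUN":  [("END", "outdoors", 1.0)],
}

def generate_caption(seed=None):
    """Walk the transition graph from START to END, emitting one word per edge."""
    rng = random.Random(seed)
    state, words = "START", []
    while state != "END":
        options = TRANSITIONS[state]
        state, word, _ = rng.choices(
            options, weights=[w for _, _, w in options]
        )[0]
        words.append(word)
    return " ".join(words)

print(generate_caption(seed=0))
```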