2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA)
DOI: 10.1109/iccubea.2018.8697360
Image Caption Generation Using Deep Learning Technique

Cited by 56 publications (17 citation statements)
References 11 publications
“…In this paper, mainly three categories of features, i.e., geometric, conceptual, and visual, are used for content generation. A variety of methods have been proposed for image caption generation in the past; they may be broadly classified into three categories: template-based methods [14,30,51,37], retrieval-based methods [40,43,16,46,20], and deep-neural-network-based (encoder-decoder) methods [49,3,15,12]. These models are often built using a CNN to encode the image and extract visual information, while an RNN decodes the visual information into a sentence.…”
Section: Related Literature Review
confidence: 99%
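The CNN-encoder / RNN-decoder pattern described in these statements can be illustrated with a toy greedy decoding loop. Everything below is invented for illustration: the vocabulary, weight matrices, and dimensions are not from the cited papers, and a random vector stands in for the CNN image encoding.

```python
import numpy as np

VOCAB = ["<start>", "<end>", "a", "dog", "runs"]  # toy vocabulary

def greedy_caption(image_features, Wh, Wy, E, max_len=10):
    """Greedy decoding: the image feature vector initializes the hidden
    state (standing in for a CNN encoding); a simple recurrent cell then
    emits one word at a time until <end> or max_len."""
    h = np.tanh(image_features)          # "encoder" output as initial state
    word = VOCAB.index("<start>")
    caption = []
    for _ in range(max_len):
        h = np.tanh(Wh @ h + E[word])    # recurrent update with word embedding
        word = int(np.argmax(Wy @ h))    # pick the highest-scoring next word
        if VOCAB[word] == "<end>":
            break
        caption.append(VOCAB[word])
    return " ".join(caption)

# Random weights only demonstrate the control flow, not a trained model.
rng = np.random.default_rng(1)
H, V = 8, len(VOCAB)
out = greedy_caption(rng.standard_normal(H),
                     rng.standard_normal((H, H)) * 0.1,
                     rng.standard_normal((V, H)),
                     rng.standard_normal((V, H)))
```

In practice the hidden state would come from a trained CNN (e.g. a penultimate-layer feature vector) and the recurrent cell would be an LSTM or GRU with learned weights; the greedy argmax step is often replaced by beam search.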
“…Identification of proper CNN and RNN models is a challenging issue. The summary of our research contributions is as follows: In regard to contribution 2, several studies in the literature have used only some specific metrics out of all the available choices [34,49,33,24,3,15,10,25]. This could potentially lead to an unfair evaluation of the results.…”
Section: Introduction
confidence: 99%
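To make the point about evaluation metrics concrete, here is a minimal sketch of clipped unigram precision, the core of BLEU-1, which is one of the metrics commonly reported for captioning. The function name and the example captions are hypothetical; the brevity penalty and higher-order n-grams of full BLEU are omitted.

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Clipped unigram precision (BLEU-1 core, brevity penalty omitted):
    each candidate word is credited at most as often as it appears in
    the reference."""
    cand = Counter(candidate.split())
    ref = Counter(reference.split())
    overlap = sum(min(n, ref[w]) for w, n in cand.items())
    return overlap / max(sum(cand.values()), 1)

# Hypothetical captions: 5 of the 6 candidate words appear in the reference.
score = unigram_precision("a dog runs on the grass",
                          "a dog is running on the grass")  # → 5/6
```

A metric like this rewards word overlap but ignores word order and synonymy, which is exactly why reporting only one or two such metrics can give an incomplete picture of caption quality.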
“…Chetan Amritkar et al. [6] presented a method in which an image caption, i.e., a description of the image's content, is generated using a CNN and an RNN.…”
Section: Literature Survey
confidence: 99%
“…The actual structure of the LSTM is understood. Recent improvements in training deep neural networks have greatly encouraged progress in image captioning [15] and have bridged the areas of machine creativity and artificial intelligence.…”
Section: Outcomes From Paper
confidence: 99%
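For reference, the LSTM structure mentioned above amounts to four gated updates per time step. The following is a minimal single-step sketch in NumPy; the dimensions and random weights are illustrative only, and the gate ordering in the stacked weight matrices is an assumption of this sketch (libraries differ on it).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. Shapes: W (4H, D), U (4H, H), b (4H,).
    Assumed gate order in the stacked weights: input, forget, output,
    candidate."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2*H])        # forget gate
    o = sigmoid(z[2*H:3*H])      # output gate
    g = np.tanh(z[3*H:4*H])      # candidate cell state
    c = f * c_prev + i * g       # new cell state: forget old, admit new
    h = o * np.tanh(c)           # new hidden state, gated output
    return h, c

# Illustrative dimensions: input size 3, hidden size 4, random weights.
rng = np.random.default_rng(0)
D, H = 3, 4
h, c = lstm_step(rng.standard_normal(D), np.zeros(H), np.zeros(H),
                 rng.standard_normal((4 * H, D)),
                 rng.standard_normal((4 * H, H)),
                 np.zeros(4 * H))
```

The forget gate `f` is what lets the cell state carry information across many steps, which is the property that made LSTMs the standard decoder in early image-captioning work.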