2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops
DOI: 10.1109/cvprw.2013.50

Generating Image Descriptions Using Semantic Similarities in the Output Space

Abstract: Automatically generating meaningful descriptions for images has recently emerged as an important area of research. In this direction, a nearest-neighbour based generative phrase prediction model (PPM) proposed by Gupta et al. (2012) was shown to achieve state-of-the-art results on the PASCAL sentence dataset, thanks to the simultaneous use of three different sources of information (i.e., visual clues, corpus statistics and available descriptions). However, they do not utilize semantic similarities among the phrases…
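
For intuition only, the sketch below shows one way a nearest-neighbour phrase prediction scheme combining those three information sources could be wired up. The scoring functions, weights, and data structures here are illustrative assumptions, not the actual formulation of Gupta et al. (2012) or of this paper.

import numpy as np

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def score_phrase(query_feat, phrase, library, corpus_logprob,
                 weights=(0.5, 0.25, 0.25)):
    # library: list of (image_feature, phrase_set) pairs from captioned
    # training images; corpus_logprob: phrase -> log-probability under a
    # text corpus. Both structures are hypothetical stand-ins.
    w_vis, w_corpus, w_desc = weights
    # Visual clue: similarity to the nearest training image whose
    # description contains this phrase.
    vis = max((cosine(query_feat, feat)
               for feat, phrases in library if phrase in phrases),
              default=0.0)
    # Corpus statistics: linguistic plausibility of the phrase.
    corp = np.exp(corpus_logprob.get(phrase, -10.0))
    # Available descriptions: relative frequency of the phrase in the library.
    freq = sum(phrase in phrases for _, phrases in library) / max(len(library), 1)
    return w_vis * vis + w_corpus * corp + w_desc * freq

def predict_phrases(query_feat, candidates, library, corpus_logprob, k=5):
    # Rank candidate phrases for a query image and keep the top k.
    return sorted(candidates,
                  key=lambda p: score_phrase(query_feat, p, library,
                                             corpus_logprob),
                  reverse=True)[:k]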

Cited by 12 publications (4 citation statements) | References 15 publications

Citation statements, ordered by relevance:
“…These descriptions are usually in the form of simple captions containing a few tens of words. Among these, there are two popular practices: either to generate a description given an image [6,16,17], or to retrieve one from a collection of available descriptions [11,14,18]. In the first setting, a new description is generated by combining visual clues using natural language generation (NLG) techniques.…”
Section: Related Work
confidence: 99%
“…This in turn can be helpful in analyzing the interplay among the individual components of the query. Conceptually, our work relates closely to the image description generation methods [6,17], and demonstrates their application to the image retrieval task given descriptive textual queries.…”
Section: Related Work
confidence: 99%
“…The search-based method uses similarity algorithms to compare extracted features against images stored in a pre-built image library and retrieve the closest matches; these images have been paired with corresponding sentence descriptions in advance, which can then be fine-tuned for appropriate output. Verma et al. (2013) adopt traditional image feature extraction methods to compare the extracted image features with those in the database, so as to determine the maximum joint-probability output among the description tuples. Li and Jin (2016) introduce a reordering mechanism that greatly improves model performance.…”
Section: Introduction
confidence: 99%
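
As a rough illustration of the search-based scheme described in the statement above, the sketch below returns the pre-matched caption of the most similar library image. The feature representation and the cosine similarity measure are stand-in assumptions, not the method of Verma et al. (2013); in practice the retrieved description would then be fine-tuned, as the statement notes.

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def retrieve_caption(query_features, library):
    # library: list of (features, caption) pairs prepared in advance,
    # i.e. images already matched with sentence descriptions.
    best_caption, best_score = None, float("-inf")
    for features, caption in library:
        score = cosine_similarity(query_features, features)
        if score > best_score:
            best_score, best_caption = score, caption
    return best_caption, best_score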