2009
DOI: 10.1007/978-3-642-10543-2_22
Image Annotation Refinement Using Web-Based Keyword Correlation

Cited by 5 publications (2 citation statements)
References 11 publications
“…We can mention here the Bayes Point Machine (Chang et al. 2003), Support Vector Machines (Cusano et al. 2004) and Decision Trees (Kwasnicka and Paradowski 2008), which all estimate the visual feature distributions associated with each word. Some authors try to refine the annotation results by reducing the difference between the expected and resulting word count vectors (Kwasnicka and Paradowski 2006), by using WordNet, which contains semantic relations between words (Jin et al. 2005), or by word co-occurrence models coupled with fast random walks (Llorente et al. 2009), an interesting approach exploiting recent advances in graph processing.…”
Section: Related Approaches (mentioning)
confidence: 99%
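The co-occurrence-plus-random-walk refinement mentioned in this excerpt (Llorente et al. 2009) can be pictured as a personalized-PageRank-style walk over a keyword graph: the classifier's raw scores seed the walk, and keyword co-occurrence edges redistribute mass toward semantically consistent labels. The sketch below illustrates that general idea only, not the cited authors' implementation; the function name, the row normalization, and the parameters `alpha` and `iters` are illustrative assumptions.

```python
import numpy as np

def refine_annotations(initial_scores, cooccurrence, alpha=0.85, iters=50):
    """Refine per-image keyword scores with a random walk over a
    keyword co-occurrence graph (personalized-PageRank-style sketch).

    initial_scores : (V,) raw annotation confidences for one image
    cooccurrence   : (V, V) symmetric keyword co-occurrence counts
    alpha          : probability of following a co-occurrence edge rather
                     than restarting at the initial annotation distribution
    """
    # Row-normalize the co-occurrence counts into a transition matrix;
    # rows with no co-occurrences become all-zero rows.
    row_sums = cooccurrence.sum(axis=1, keepdims=True)
    transition = np.divide(cooccurrence, row_sums,
                           out=np.zeros_like(cooccurrence, dtype=float),
                           where=row_sums > 0)

    # Normalize the classifier's scores into a restart distribution.
    restart = initial_scores / max(initial_scores.sum(), 1e-12)

    scores = restart.copy()
    for _ in range(iters):
        # One walk step: follow co-occurrence edges with probability alpha,
        # restart at the original annotation scores with probability 1 - alpha.
        scores = alpha * transition.T @ scores + (1 - alpha) * restart
    return scores
```

Keywords whose neighbours in the co-occurrence graph are also highly scored get boosted, while isolated, weakly supported keywords decay, which is the intuition behind using co-occurrence statistics to clean up noisy annotations.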
“…In the first category, image annotation is formulated as a supervised classification problem, and different machine learning techniques are used to predict the annotations of new images [3][4][5]. The second category learns the correlation between image features and textual words from examples of annotated images and then applies the learned correlation to predict words for unseen images [6][7][8][9]. These methods infer the association between images and their related tags only at the image level, and use image-to-image visual similarity to refine or predict image tags.…”
Section: Introduction (mentioning)
confidence: 99%
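The image-to-image visual similarity refinement described in this second excerpt is commonly realised as nearest-neighbour tag propagation: an unseen image borrows tags from its visually closest annotated neighbours. The following is a minimal, generic sketch of that scheme, not the implementation of any of the cited works; the cosine similarity measure, the feature representation, and the value of `k` are assumptions.

```python
import numpy as np

def propagate_tags(query_feat, train_feats, train_tags, k=5):
    """Score tags for an unseen image from its k visually most similar
    annotated training images (nearest-neighbour tag-propagation sketch).

    query_feat  : (D,) visual feature vector of the unseen image
    train_feats : (N, D) feature vectors of the annotated training images
    train_tags  : (N, V) binary tag matrix for the training images
    """
    # Cosine similarity between the query and every training image.
    q = query_feat / (np.linalg.norm(query_feat) + 1e-12)
    t = train_feats / (np.linalg.norm(train_feats, axis=1, keepdims=True) + 1e-12)
    sims = t @ q

    # Take the k most similar neighbours and weight their tags by similarity.
    nn = np.argsort(sims)[-k:]
    weights = sims[nn] / (sims[nn].sum() + 1e-12)
    tag_scores = weights @ train_tags[nn]

    # Rank tags by score; the top-scoring ones become the predicted annotations.
    return tag_scores
```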