2010 IEEE International Conference on Multimedia and Expo
DOI: 10.1109/icme.2010.5582604

Semantically similar visual words discovery to facilitate visual invariance

Cited by 3 publications (4 citation statements)
References 10 publications

“…In most recent techniques, K-means [Sheng et al. 2010; Kandasamy and Rodrigo 2010; Hotta 2009; Chimlek et al. 2010] is preferred for clustering data because it is unsupervised and easily implemented. However, K-means has obvious sensitivity in initialization and local searching abilities.…”
Section: Bag of Words with Multithresholding
Mentioning confidence: 99%
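
The excerpt above criticises K-means' sensitivity to its initialization. A minimal sketch of the usual mitigation, assuming scikit-learn and synthetic descriptor data (the vocabulary size and all parameter values are illustrative, not taken from the cited papers):

```python
# Minimal sketch (not from the cited papers): building a visual-word
# vocabulary with K-means, using k-means++ seeding and multiple restarts
# to reduce the initialization sensitivity mentioned above.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
descriptors = rng.random((5000, 128))   # stand-in for SIFT-like local descriptors

kmeans = KMeans(
    n_clusters=200,       # vocabulary size (illustrative value)
    init="k-means++",     # seeding that spreads the initial centroids apart
    n_init=10,            # restart 10 times and keep the run with the best inertia
    random_state=0,
).fit(descriptors)

vocabulary = kmeans.cluster_centers_    # one centroid per visual word
```

k-means++ seeding combined with multiple restarts is a common way to reduce the dependence on the initial centroids that the citing papers point out; it does not remove the local-search limitation.
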
“…Feature selection techniques have been widely used in 2D image classification problems [Chimlek et al. 2010; Kerroum et al. 2009]. In our case, a plethora of features has to be integrated in order to deal with both textured and nontextured databases.…”
Section: Feature Selection and Sherd Classification
Mentioning confidence: 99%
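
The feature-selection step mentioned here can be illustrated with a simple filter-style selector. The sketch below assumes scikit-learn, synthetic feature vectors and labels, and a mutual-information criterion; none of this is claimed to match the selectors actually used in the cited works.

```python
# Minimal sketch (illustrative only): filter-style feature selection for an
# image classification setting, scoring features by mutual information with
# the class labels and keeping the top k.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(0)
X = rng.random((300, 512))              # stand-in image feature vectors
y = rng.integers(0, 4, size=300)        # stand-in class labels

selector = SelectKBest(score_func=mutual_info_classif, k=64)
X_reduced = selector.fit_transform(X, y)   # shape (300, 64): selected features only
```
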
“…In [5] the author uses Olympic game ontology by fusing both textual and visual information but for ontology the author uses only the anthropological structure of the Olympic game event, where in our work we also interpolate the image feature in those Ontology creations.…”
Section: Related Work
Mentioning confidence: 99%
“…Subsequently, histograms of the estimated words, that are computed using the constructed vocabulary of visual words and the original descriptors, are used for representing the image content. Typical techniques of this category employ the K-means algorithm for clustering (Chimlek et al. 2010; Hotta 2009; Kandasamy and Rodrigo 2010; Sheng et al. 2010), mainly due to its ease of implementation. However, K-means has increased sensitivity to its initialisation and local search strategy.…”
Section: Visual Features Extraction Approach
Mentioning confidence: 99%
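
The bag-of-words pipeline this excerpt describes (a K-means vocabulary, then per-image histograms of visual words) can be sketched as follows; the sketch assumes scikit-learn and synthetic descriptors, and the vocabulary size and data shapes are placeholders rather than values from the paper.

```python
# Minimal sketch (assumptions: a K-means vocabulary as above, synthetic data):
# representing one image as a normalized histogram of visual words.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
train_descriptors = rng.random((5000, 128))   # descriptors pooled from training images
image_descriptors = rng.random((350, 128))    # descriptors extracted from a single image

vocab_size = 200
kmeans = KMeans(n_clusters=vocab_size, n_init=10, random_state=0).fit(train_descriptors)

# Assign each local descriptor to its nearest visual word, then histogram the word ids.
word_ids = kmeans.predict(image_descriptors)
hist = np.bincount(word_ids, minlength=vocab_size).astype(float)
bow_vector = hist / hist.sum()                # L1-normalized bag-of-words representation
```

The resulting fixed-length vector can then be fed to any standard classifier or matcher, which is what makes the representation attractive despite K-means' initialisation sensitivity noted in the excerpt.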