2005
DOI: 10.1007/978-3-540-31849-1_85
Indexing Text and Visual Features for WWW Images

Abstract: In this paper, we present a novel indexing technique called Multi-scale Similarity Indexing (MSI) to index an image's multiple features in a single one-dimensional structure. For both text and visual feature spaces, the similarity between a point and a local partition's center in the individual space is used as the indexing key, where similarity values from different features are distinguished by different scales. A single indexing tree can then be built on these keys. Based on the property that relevant images have sim…
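The abstract's key idea — collapsing per-feature similarities into one scalar key per feature space, with scale offsets keeping the spaces disjoint — can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the distance function, the toy cluster centers, and the `scale` offset are all assumptions made for the example.

```python
import math

def msi_key(point, centers, feature_id, scale=10.0):
    """Map a feature vector to a one-dimensional MSI-style key.

    The key is the distance from the point to its nearest local
    partition center in that feature space, offset by
    feature_id * scale so keys from different feature spaces occupy
    disjoint ranges. (Illustrative sketch only; the paper's exact
    key formulation may differ.)
    """
    nearest = min(math.dist(point, c) for c in centers)
    return feature_id * scale + nearest

# Two hypothetical feature spaces, each partitioned into two centers:
text_centers = [(0.0, 0.0), (1.0, 1.0)]     # e.g. text feature space
color_centers = [(0.5, 0.5), (2.0, 2.0)]    # e.g. visual feature space

k_text = msi_key((0.1, 0.2), text_centers, feature_id=0)
k_color = msi_key((0.6, 0.4), color_centers, feature_id=1)
```

Because every key is a plain scalar and the feature spaces map to disjoint ranges, all keys can be stored together in one ordinary one-dimensional index (e.g. a B+-tree), which is the property the abstract emphasizes.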

Cited by 7 publications (5 citation statements)
References 21 publications
“…Once the high-dimensional data is mapped into lowerdimensional space, conventional indexing schemes can then be applied [20,28,29].…”
Section: Introduction (mentioning)
confidence: 99%
“…Indexing methods to search for images and videos with both text and content information were proposed in Smith and Chang (1996) and Smith et al. (2001), but they do not support simultaneous querying using hybrid query feature vectors. In Shen et al. (2005, 2006), a multi-scale similarity indexing method was proposed for indexing both text and content features of images. It reduces all features to a one-dimensional key space and uses a standard B+-tree to index the keys.…”
Section: Related Work (mentioning)
confidence: 99%
“…As wavelet transformation is a powerful tool in effectively generating compact representation of visual features of images [10], we adopt the Daubechies' wavelets to extract the wavelet coefficients using the LUV color space for its good perception correlation properties [15,7]. The feature elements chosen from the wavelet coefficients are independent of image resolution and scaling.…”
Section: The Atomic Semantic Domains (mentioning)
confidence: 99%
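The statement above cites Daubechies wavelets as a way to generate compact, resolution-independent representations of visual features. As a minimal illustration of that compaction idea, here is one level of the simpler Haar wavelet transform (a hypothetical stand-in; the cited work uses Daubechies wavelets on the LUV color space):

```python
def haar_step(signal):
    """One level of the 1-D Haar wavelet transform.

    Returns (approximation, detail) coefficients: pairwise averages
    capture the coarse signal, pairwise half-differences capture local
    variation. (Illustrative stand-in for the Daubechies wavelets
    used in the cited work; assumes an even-length input.)
    """
    avgs = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    diffs = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avgs, diffs

# A toy 1-D "image row": the approximation coefficients are a
# half-length summary that can be recursed on for coarser scales.
row = [9.0, 7.0, 3.0, 5.0]
approx, detail = haar_step(row)
# approx == [8.0, 4.0], detail == [1.0, -1.0]
```

Selecting feature elements from such coefficients, rather than from raw pixels, is what makes the representation largely independent of the image's resolution and scaling.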
“…Recent research efforts in CBIR focus on bridging the so-called "semantic gap" by combining Relevance Feedback techniques or text and keyword predicates to obtain powerful retrieval methods for image collections [3,9,10]. When searching for images, users typically want to specify the semantic class of the scene or the objects it should contain.…”
Section: Introduction (mentioning)
confidence: 99%