2008 International Workshop on Content-Based Multimedia Indexing
DOI: 10.1109/cbmi.2008.4564970
Fast indexing method for image retrieval using tree-structured lattices

Cited by 27 publications (12 citation statements) | References 9 publications
“…On the other hand, Philippe and Matthieu [50] introduced in their paper the best-known active learning methods for image retrieval, such as Bayes classification, k-Nearest Neighbors [51], neural networks [52,53], wavelet networks [54,55], lattice trees [56][57][58], Gaussian mixtures, and support vector machines. Ekta and Hardeep [59] proposed the use of a Bayesian algorithm, a supervised statistical classification method, to reduce noise in images.…”
Section: Low-level Content Approaches
confidence: 99%
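To make the first family of classifiers in this statement concrete, the following is a minimal k-Nearest Neighbors sketch that labels a query image from pre-computed feature vectors. It is an illustration only, not taken from the cited papers; feature extraction is assumed to happen elsewhere and all names and sizes are hypothetical.

```python
# Minimal sketch (not from the cited papers): k-NN over pre-computed
# image feature vectors, one of the classifiers listed above.
import numpy as np

def knn_predict(train_feats, train_labels, query_feat, k=5):
    """Return the majority label among the k nearest training images."""
    # Euclidean distance from the query descriptor to every indexed image.
    dists = np.linalg.norm(train_feats - query_feat, axis=1)
    nearest = np.argsort(dists)[:k]            # indices of the k closest images
    votes = train_labels[nearest]
    values, counts = np.unique(votes, return_counts=True)
    return values[np.argmax(counts)]           # majority vote

# Toy usage with random 64-D descriptors and two classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))
y = rng.integers(0, 2, size=100)
print(knn_predict(X, y, rng.normal(size=64), k=5))
```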
“…With no prior knowledge of the person's location in each video frame, the human action in a video stream can be recovered from a large number of local descriptors extracted from the video frames (Sekma et al., 2013), (Dammak et al., 2012), (Sekma et al., 2014). Local descriptors, coupled with the bag-of-words (BOW) encoding method (Sivic and Zisserman, 2003), (Mejdoub et al., 2008), (Mejdoub et al., 2007), have recently become a very popular video representation (Ben Aoun et al., 2014), (Knopp et al., 2010), (Laptev et al., 2008), (Wang et al., 2009), (Alexander et al., 2008), (Wang et al., 2011), (Raptis and Soatto, 2010), (Pyry et al., 2010), (Jiang et al., 2012) and (Jain et al., 2013). The BOW uses a codebook to create a representation based on the visual content of a video, where the codebook is a set of visual words that represents the distribution of features over the whole video.…”
Section: Introduction
confidence: 99%
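The bag-of-words encoding described in this statement can be sketched in a few lines: each local descriptor is quantized to its nearest visual word in a previously learned codebook, and the counts of the words form the video representation. This is a generic illustration, not the implementation of the cited works; all names and dimensions are assumptions.

```python
# Minimal BOW-encoding sketch, assuming the codebook was learned beforehand
# (e.g. by clustering training descriptors). Not the cited authors' code.
import numpy as np

def bow_histogram(descriptors, codebook):
    """Quantize each local descriptor to its nearest visual word and
    return the normalized histogram of word occurrences."""
    # Pairwise distances, shape (n_descriptors, n_words).
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)                   # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)         # L1-normalized BOW vector

# Toy usage: 500 local descriptors of dimension 128, 64-word codebook.
rng = np.random.default_rng(1)
video_descriptors = rng.normal(size=(500, 128))
codebook = rng.normal(size=(64, 128))
print(bow_histogram(video_descriptors, codebook).shape)   # (64,)
```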
“…To increase precision, we initialize k-means 8 times and keep the result with the lowest error. The BOW [20,21,22] representation then assigns each feature to the closest vocabulary word. The resulting histograms of visual word occurrences are used as video sequence representations.…”
Section: Experimental Study
confidence: 99%
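A minimal sketch of the procedure quoted above, assuming scikit-learn as the implementation (the statement does not name one): k-means is run with 8 random initializations, the run with the lowest error (inertia) is kept as the vocabulary, and each video is then represented by the histogram of its features' closest vocabulary words. All data and sizes below are placeholders.

```python
# Sketch of "initialize k-means 8 times, keep the lowest-error result,
# then build visual-word histograms". Uses scikit-learn as an assumption.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
train_features = rng.normal(size=(2000, 128))   # local descriptors from training videos

# n_init=8 runs k-means with 8 random initializations and keeps the
# clustering with the smallest sum of squared distances to the centers.
kmeans = KMeans(n_clusters=64, n_init=8, random_state=0).fit(train_features)

def video_representation(features, vocabulary):
    """Histogram of visual-word occurrences for one video sequence."""
    words = vocabulary.predict(features)        # closest vocabulary word per feature
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

query_features = rng.normal(size=(300, 128))    # descriptors of one video
print(video_representation(query_features, kmeans).shape)  # (64,)
```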