1997
DOI: 10.1109/34.598227

Probabilistic visual learning for object representation

Abstract: We present an unsupervised technique for visual learning, which is based on density estimation in high-dimensional spaces using an eigenspace decomposition. Two types of density estimates are derived for modeling the training data: a multivariate Gaussian (for unimodal distributions) and a Mixture-of-Gaussians model (for multimodal distributions). These probability densities are then used to formulate a maximum-likelihood estimation framework for visual search and target detection for automatic object recognit…
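The abstract's two-stage recipe (project images into an eigenspace, fit either a single Gaussian or a Mixture-of-Gaussians density there, then score candidates by likelihood) can be sketched roughly as follows. This is a minimal illustration assuming scikit-learn, not the authors' implementation: the function names, parameter values, and synthetic data are invented for the example, and it omits the paper's complementary "distance-from-feature-space" term for the energy outside the subspace.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture


def fit_eigenspace_density(X_train, n_eigen=20, n_modes=3, seed=0):
    """Project training images into a PCA subspace and fit a density there.

    X_train : (N, D) array of vectorized (flattened) training images.
    n_modes=1 gives the unimodal Gaussian case; n_modes>1 the mixture case.
    """
    pca = PCA(n_components=n_eigen).fit(X_train)
    Z = pca.transform(X_train)  # coordinates in the principal ("eigen") subspace
    gmm = GaussianMixture(n_components=n_modes, covariance_type="full",
                          random_state=seed).fit(Z)
    return pca, gmm


def detection_score(pca, gmm, X):
    """Per-sample log-likelihood of new patches under the learned density."""
    return gmm.score_samples(pca.transform(X))


# Illustrative usage on synthetic data standing in for vectorized image patches.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1024))           # 200 "patches" of 32x32 pixels, flattened
pca, gmm = fit_eigenspace_density(X, n_eigen=20, n_modes=3)
scores = detection_score(pca, gmm, X)      # higher score = more "object-like" patch
```

In a sliding-window detector, the patch with the highest log-likelihood would be reported as the detected target, which mirrors the maximum-likelihood detection rule described in the abstract.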

Cited by 1,219 publications (710 citation statements)
References 35 publications
“…the Gaussian mixture model allows the soft partition of data points in proportion to the responsibility defined by the posterior probability which indicates the relative probability that the data was derived from each class. Moghaddam and Pentland (1996), for example, presented a method that combines the principal component analysis with a Gaussian mixture model for object recognition and detection. As the dimensionality of object images is in general very high in the input space, the method first finds a principal sub-space, and then the Gaussian mixture model is applied to estimate the multimodal densities within the principal sub-space.…”
Section: Gaussian Mixture Models (mentioning)
confidence: 99%
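The "responsibility" in this excerpt is the posterior probability that each data point was generated by a given mixture component, which is what produces the soft partition. A minimal, self-contained sketch, assuming scikit-learn and purely synthetic data (the cluster layout, dimensionality, and component count are illustrative only):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Two synthetic clusters in a 5-D "principal subspace".
rng = np.random.default_rng(0)
Z = np.vstack([rng.normal(-2.0, 1.0, size=(100, 5)),
               rng.normal(+2.0, 1.0, size=(100, 5))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(Z)
resp = gmm.predict_proba(Z)                 # responsibilities, shape (200, 2)
assert np.allclose(resp.sum(axis=1), 1.0)   # each point is softly assigned across components
```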
“…Poggio & Edelman, 1990; Murase & Nayar, 1995; Moghaddam & Pentland, 1996). This approach contrasts with a conventional approach that constructs structural descriptions of objects using 3D volumetric primitives in the object-centered coordinate frame (e.g.…”
Section: Introduction (mentioning)
confidence: 99%
“…The subsequent swimmer detection is based on the approach of Shechtman, 16 who also introduced the self-similarity descriptors. The underlying probabilistic matching approach was suggested by Moghaddam et al. 17 Furthermore, we compare the performance of the self-similarity features to several alternative feature descriptors such as SIFT, 18 Geometric Blur 19 and HOG. 20 Finally, three-dimensional joint positions are estimated by applying the method of Taylor.…”
Section: Related Work (mentioning)
confidence: 99%
“…Sparse structuring of statistical dependency explains the empirical success of "parts-based" methods for face detection and face recognition [1][2][3][4][5]. Such parts-based methods concentrate modeling power on localized regions, in which dependencies tend to be strong, and use weaker models over larger areas in which dependencies tend to be less significant.…”
Section: Introduction (mentioning)
confidence: 99%