2009 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvprw.2009.5206847

View-invariant dynamic texture recognition using a bag of dynamical systems

Abstract: In this paper, we consider the problem of categorizing videos of dynamic textures under varying viewpoint. We propose to model each video with a collection of Linear Dynamical Systems (LDSs) describing the dynamics of spatiotemporal video patches. This bag of systems (BoS) representation is analogous to the bag of features (BoF) representation, except that we use LDSs as feature descriptors. This poses several technical challenges to the BoF framework. Most notably, LDSs do not live in a Euclidean space, hence…
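To make the abstract's pipeline concrete: each spatiotemporal patch is summarized by a linear dynamical system, typically identified with the standard SVD-based (PCA) method used in the dynamic texture literature. The sketch below illustrates that identification step only; the function name `fit_lds`, the patch layout, and the state dimension are assumptions for illustration, not the authors' code.

```python
import numpy as np

def fit_lds(patch, n_states=5):
    """Fit the (A, C) parameters of a linear dynamical system to a patch.

    patch: array of shape (num_pixels, num_frames), each column a
    vectorized frame of one spatiotemporal patch.
    Uses the standard SVD-based identification common in the dynamic
    texture literature (a suboptimal but closed-form estimator).
    """
    Y = patch - patch.mean(axis=1, keepdims=True)    # remove temporal mean
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    C = U[:, :n_states]                              # observation (appearance) matrix
    X = np.diag(s[:n_states]) @ Vt[:n_states, :]     # hidden state trajectory
    # Least-squares dynamics estimate: X[:, 1:] ~ A @ X[:, :-1]
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])
    return A, C
```

The resulting (A, C) pair serves as the patch descriptor. As the abstract notes, such descriptors do not live in a Euclidean space, so they are compared with system-theoretic metrics (e.g., subspace-angle or Martin-type distances) rather than with plain vector distances.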

Citation types: 0 supporting, 65 mentioning, 0 contrasting. Citing publications range from 2011 to 2019.

Cited by 27 publications (65 citation statements). References 6 publications.
“…To recognize dynamic features, Ravichandran et al. [17] model each video with a collection of motion primitives that describe the dynamics of spatiotemporal video patches. This bag-of-motion-primitives (BoMP) approach is similar to the work on using the bag-of-features (BoF) for object recognition, which categorizes images by observing the distribution of a small collection of features.…”
Section: Related Work and Problem Context (mentioning)
confidence: 99%
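The BoF analogy in the excerpt above can be made concrete: once patch-level LDS descriptors exist, each video is reduced to a histogram over a codebook of representative systems. Because LDS parameters cannot be averaged like Euclidean vectors, the codebook is typically built with medoid-style clustering over pairwise system distances. A hypothetical sketch of the final histogram step, assuming the distances are already computed (`bos_histogram` and its input layout are illustrative):

```python
import numpy as np

def bos_histogram(dist_to_codewords):
    """Bag-of-systems histogram for one video.

    dist_to_codewords: (num_patches, num_codewords) array holding the
    distance from each patch-level LDS descriptor to each codeword LDS
    (e.g., a Martin-type distance). Codewords are assumed to come from
    k-medoids clustering of the training descriptors.
    """
    assignments = dist_to_codewords.argmin(axis=1)   # nearest codeword per patch
    k = dist_to_codewords.shape[1]
    hist = np.bincount(assignments, minlength=k).astype(float)
    return hist / hist.sum()                         # normalized histogram
```

The normalized histograms can then be fed to any standard classifier (nearest neighbour, SVM), exactly as in the image-domain BoF pipeline.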
“…The generative methods [2,8,12,21,22,24,28] attempt to quantitatively model the underlying physical dynamic system that generates DT sequences and classify DT sequences based on the system parameters of the corresponding physical model. For example, in [24], each pixel is expressed as a linear combination of the neighboring pixels in the spatio-temporal domain.…”
Section: Previous Work (mentioning)
confidence: 99%
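The model singled out in this excerpt, each pixel expressed as a linear combination of its spatio-temporal neighbours (attributed to [24]), is in essence a spatiotemporal autoregressive model. A minimal least-squares fit is sketched below; the neighbourhood offsets and the helper name `fit_star` are illustrative assumptions, not the exact formulation of [24].

```python
import numpy as np

# Illustrative causal neighbourhood: (dt, dy, dx) offsets into the past/left/up.
OFFSETS = [(-1, 0, 0), (-1, 0, -1), (-1, -1, 0), (0, 0, -1), (0, -1, 0)]

def fit_star(video):
    """Least-squares coefficients of a spatiotemporal AR model.

    video: array of shape (T, H, W). Each interior pixel video[t, y, x]
    is regressed on its neighbours video[t+dt, y+dy, x+dx].
    """
    T, H, W = video.shape
    rows, targets = [], []
    for t in range(1, T):
        for y in range(1, H):
            for x in range(1, W):
                targets.append(video[t, y, x])
                rows.append([video[t + dt, y + dy, x + dx]
                             for dt, dy, dx in OFFSETS])
    A, b = np.asarray(rows), np.asarray(targets)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```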
“…Alternatively, discriminative methods [21,27,31] have been proposed for DT classification without explicitly modeling the underlying dynamic system. In [27], spatiotemporal filters are constructed that are specifically tuned to certain local DT structures, using a few image patterns and motion patterns.…”
Section: Previous Work (mentioning)
confidence: 99%
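In contrast with the generative route, the discriminative approach described here classifies directly from filter responses. The toy sketch below uses a single smoothing-plus-temporal-derivative filter as a stand-in; the actual filters of [27] are tuned banks, and `filter_response_histogram` is only an assumed illustration.

```python
import numpy as np
from scipy import ndimage

def filter_response_histogram(video, n_bins=16):
    """Histogram of responses to one crude spatiotemporal filter.

    video: array of shape (T, H, W). Spatial Gaussian smoothing followed
    by a temporal derivative gives a simple motion-sensitive response;
    a real discriminative DT method would use a bank of tuned filters.
    """
    smoothed = ndimage.gaussian_filter(video.astype(float), sigma=(0, 1.5, 1.5))
    response = ndimage.sobel(smoothed, axis=0)       # derivative along time
    hist, _ = np.histogram(response, bins=n_bins)
    return hist / hist.sum()
```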