Dynamic view selection for multi-camera action recognition
2015 | DOI: 10.1007/s00138-015-0715-9

Cited by 5 publications (6 citation statements) | References 35 publications
“…In the last decade, conventional CV approaches have dominated the MVHAR field; they represented human body configuration using 2D, 3D, and 4D models. Methods using 2D models extracted silhouettes and optical flow from sequences of images for direct classification [9,15] or transformation to higher-level features [1,18,38,39]. High-level features such as silhouettes contour points and centers of mass [15] showed superiority over other methods employing 2D data [18] to encode movement in human action.…”
Section: Related Work | Citation type: mentioning | Confidence: 99%
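As a rough illustration of the 2D cues this excerpt refers to (silhouettes, contour points, centers of mass, optical flow), here is a minimal OpenCV sketch, assuming a static camera, grayscale frames, and a pre-captured empty-scene background; the function name and thresholds are illustrative, not taken from the cited methods.

```python
import cv2
import numpy as np

def silhouette_features(frame_gray, background_gray, prev_gray=None):
    """Extract simple 2D cues: silhouette contour, center of mass, optical flow."""
    # Silhouette via background subtraction and a fixed threshold (assumed setup)
    diff = cv2.absdiff(frame_gray, background_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # Largest external contour approximates the person's silhouette
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea)

    # Center of mass from the image moments of that contour
    m = cv2.moments(contour)
    cx = m["m10"] / (m["m00"] + 1e-9)
    cy = m["m01"] / (m["m00"] + 1e-9)

    # Dense optical flow between consecutive frames as a motion cue
    flow = None
    if prev_gray is not None:
        flow = cv2.calcOpticalFlowFarneback(prev_gray, frame_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
    return contour, (cx, cy), flow
```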
“…The conventional methods require sophisticated features extraction to identify informative features from raw data [14][15][16]. The features extractor usually works independently from the classifier [17,18]. Studies based on this approach focus either on the classifier or on feature engineering [19,20].…”
Section: Introduction | Citation type: mentioning | Confidence: 99%
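To make the two-stage pipeline this excerpt describes concrete, here is a minimal sketch in which a hand-engineered descriptor (HOG, one of many possible choices) is computed independently of the classifier (a linear SVM); the separation of feature extraction from classification is the point, and the helper names and dataset placeholders are assumptions, not any cited system.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

_hog = cv2.HOGDescriptor()  # default 64x128 detection window

def extract_features(frame_gray):
    """Stage 1: a hand-crafted descriptor, computed with no reference to the classifier."""
    resized = cv2.resize(frame_gray, (64, 128))  # match the HOG window size
    return _hog.compute(resized).ravel()

def train_and_classify(train_frames, train_labels, test_frame):
    """Stage 2: an off-the-shelf classifier trained on the fixed features."""
    X = np.stack([extract_features(f) for f in train_frames])
    clf = SVC(kernel="linear").fit(X, train_labels)
    return clf.predict(extract_features(test_frame).reshape(1, -1))[0]
```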
“…Previous research has investigated both recognition from multiple views and discriminative sensor and view selection. Multi-view recognition methods were developed to recognize objects [4], gait [5], and actions [6] from multiple points of view. The previous approaches are generally based on visual features manually engineered for their specific applications.…”
Section: Introduction | Citation type: mentioning | Confidence: 99%
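One simple operational reading of "discriminative view selection" in a multi-camera setup is sketched below (an assumption for illustration only, not the selection criterion of the cited paper): score the same clip with a per-view classifier in every camera and keep the view whose prediction is most confident.

```python
import numpy as np

def select_view_and_predict(view_probs):
    """Max-confidence view selection.

    view_probs: array of shape (n_views, n_classes), one probability vector
    per camera for the same action clip.
    Returns (selected_view, predicted_class).
    """
    view_probs = np.asarray(view_probs)
    confidence = view_probs.max(axis=1)        # peak class probability per view
    best_view = int(confidence.argmax())       # most confident camera
    best_class = int(view_probs[best_view].argmax())
    return best_view, best_class

# Example: three cameras, four action classes
probs = [[0.40, 0.30, 0.20, 0.10],   # view 0: ambiguous
         [0.05, 0.85, 0.05, 0.05],   # view 1: confident
         [0.25, 0.25, 0.25, 0.25]]   # view 2: uninformative
print(select_view_and_predict(probs))  # -> (1, 1)
```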
“…Examples of the 2D methods are layer-based circular representation of human model structure [48], bag-of-visual-words using spatial-temporal interest points for human modeling and classification [49], view-invariant action masks and movement representation [50], R-transform features [51], silhouette feature space with PCA [52], low-level characteristics of human features [53], combination of optical-flow histograms and bag-of-interest-point-words using transition HMMs [54], contour-based and uniform local binary pattern with SVM [55], multifeatures with key poses learning [56], dimension-reduced silhouette contours [57], action map using linear discriminant analysis on multiview action images [58], posture prototype map using self-organizing map with voting function and Bayesian framework [59], multiview action learning using convolutional neural networks with long short term memory [60], and multiview action recognition with an autoencoder neural network for learning view-invariant features [61].…”
Section: Introduction | Citation type: mentioning | Confidence: 99%
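As a toy version of one entry in the list above, the bag-of-visual-words model over local spatio-temporal descriptors, here is a sketch that builds a k-means codebook and represents a clip as a histogram of word assignments; descriptor extraction (e.g. around spatio-temporal interest points) is assumed to happen elsewhere, and the random data below is a stand-in only.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(all_descriptors, n_words=100, seed=0):
    """Cluster local descriptors into a visual vocabulary."""
    return KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(all_descriptors)

def bag_of_words(clip_descriptors, codebook):
    """Represent one clip as a normalized histogram of visual-word assignments."""
    words = codebook.predict(clip_descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Toy usage with random stand-in descriptors
rng = np.random.default_rng(0)
codebook = build_codebook(rng.normal(size=(500, 32)), n_words=16)
clip_hist = bag_of_words(rng.normal(size=(40, 32)), codebook)
```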