2013 IEEE International Conference on Computer Vision
DOI: 10.1109/iccv.2013.388
Latent Multitask Learning for View-Invariant Action Recognition

Abstract: This paper presents an approach to view-invariant action recognition, where human poses and motions exhibit large variations across different camera viewpoints. When each viewpoint of a given set of action classes is specified as a learning task, multitask learning appears suitable for achieving view invariance in recognition. We extend standard multitask learning to allow identifying: (1) latent groupings of action views (i.e., tasks), and (2) discriminative action parts, along with joint learning of …
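The abstract's setup — one learning task per camera viewpoint — can be sketched with a common shared-plus-specific weight decomposition, w_t = w0 + v_t. This is an assumption for illustration only (the paper's actual formulation is a latent large-margin model with part-based representations), using synthetic data in place of real action features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multitask setup: each camera viewpoint is one task.
# Per-task linear classifiers decompose as w_t = w0 + v_t, where w0 is
# shared across views and v_t is a regularized view-specific deviation.
n_tasks, n_feat, n_per_task = 3, 10, 40

# Synthetic data: labels come from a shared direction plus per-view noise.
w_true = rng.normal(size=n_feat)
X = [rng.normal(size=(n_per_task, n_feat)) for _ in range(n_tasks)]
y = [np.sign(x @ (w_true + 0.3 * rng.normal(size=n_feat))) for x in X]

w0 = np.zeros(n_feat)
V = np.zeros((n_tasks, n_feat))
lr, lam = 0.1, 0.1

for _ in range(200):
    g0 = np.zeros(n_feat)
    for t in range(n_tasks):
        wt = w0 + V[t]
        margin = y[t] * (X[t] @ wt)
        s = -y[t] / (1.0 + np.exp(margin))      # logistic-loss gradient wrt margin
        g = X[t].T @ s / n_per_task
        V[t] -= lr * (g + lam * V[t])           # penalize view-specific deviation
        g0 += g
    w0 -= lr * g0 / n_tasks                     # shared component sees all views

# Per-view training accuracy with the combined weights.
acc = [float(np.mean(np.sign(X[t] @ (w0 + V[t])) == y[t])) for t in range(n_tasks)]
```

The regularizer on `v_t` controls how much each view may deviate from the shared model, which is the basic mechanism multitask approaches exploit for view invariance.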

Cited by 31 publications (26 citation statements)
References 24 publications
“…To our knowledge, multi-view action recognition using multi-task learning has not been considered by other works, with the exception of [21], which is contemporaneous to ours. Our approach also differs from [21] in the following respects: (1) while [21] seeks to learn latent action groups, so that within-group feature sharing is allowed but between-group feature sharing is prohibited, we explore learning of latent and discriminative SSM features across views; (2) a part-based action representation is used in [21], while we use the bag-of-words model for encoding SSM features; (3) a large-margin framework is used for the LMTL formulation in [21], while we propose LDA-based MTL; and (4) while the main focus of [21] is multi-view action recognition, we also consider the problem of action recognition with a missing view, i.e., on a novel camera view for which no examples are available in the training set.…”
Section: Multi-task Learning
confidence: 87%
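The bag-of-words encoding that the citing work contrasts with the part-based representation of [21] can be illustrated as follows. The k-means codebook size, descriptor dimensionality, and random stand-ins for SSM features are all assumptions for this sketch, not details from either paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=10):
    """Tiny k-means to build a codebook of visual words."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        lab = d.argmin(1)
        for j in range(k):
            if np.any(lab == j):
                centers[j] = X[lab == j].mean(0)
    return centers

def bow_hist(desc, centers):
    """Assign each descriptor to its nearest word, return a normalized histogram."""
    d = ((desc[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    h = np.bincount(d.argmin(1), minlength=len(centers)).astype(float)
    return h / h.sum()

# Random stand-ins for local SSM descriptors pooled over training videos.
train_desc = rng.normal(size=(500, 16))
codebook = kmeans(train_desc, k=8)

# Encode one video's descriptors as a fixed-length bag-of-words vector.
video_desc = rng.normal(size=(120, 16))
h = bow_hist(video_desc, codebook)
```

The resulting fixed-length histogram is what makes bag-of-words convenient as input to the LDA-based MTL classifier the statement mentions, at the cost of discarding the spatial structure that part-based models retain.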
“…Our approach achieves higher recognition performance, in terms of both single-view and (average) multi-view accuracies, than most previous methods. While the approaches proposed in [16], [21], and [55] achieve higher recognition accuracy than ours, they suffer from other limitations. The algorithm in [55] is based on a latent kernelized structural SVM, whose inference is intractable on large-scale datasets.…”
Section: Quantitative Evaluation
confidence: 95%