2013 IEEE Conference on Computer Vision and Pattern Recognition 2013
DOI: 10.1109/cvpr.2013.61
Multi-task Sparse Learning with Beta Process Prior for Action Recognition

Abstract: In this paper, we formulate human action recognition as a novel Multi-Task Sparse Learning (MTSL) …

Cited by 37 publications (11 citation statements)
References 23 publications
“…The MTSL approach combined multiple features efficiently to improve the recognition performance. Its robust sparse coding technique mines correlations between different tasks to obtain a shared sparsity pattern, which is ignored if each task is learned individually [89]. Details of the parameters can be found in the paper.…”
Section: Multi-task Sparse Learning
confidence: 99%
See 1 more Smart Citation
“…MTSL approach combined multiple features efficiently to improve the recognition performance. Its robust sparse coding technique [89]. Details of the parameters can be found in the paper mines correlations between different tasks to obtain a shared sparsity pattern which is ignored if each task is learned individually.…”
Section: Multi-task Sparse Learningmentioning
confidence: 99%
“…Multi-Task Sparse Learning (MTSL) [89] aims to construct a given test sample with multiple features and very few bases. In this framework, each feature modality is considered as a single task in MTSL to learn a sparse representation.…”
Section: Multi-task Sparse Learning
confidence: 99%
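The shared-support idea described in these citations — every feature modality (task) reconstructs the test sample using the same few dictionary bases — can be sketched with a simple greedy stand-in. The snippet below uses Simultaneous Orthogonal Matching Pursuit (SOMP) to pick one support common to all tasks; this is an illustrative substitute, not the paper's Beta-process MTSL formulation, and the function name and parameters are hypothetical.

```python
import numpy as np

def somp(dictionaries, signals, n_atoms=3):
    """Greedy joint sparse coding: pick a support shared by all tasks.

    dictionaries : list of (n_dim, n_basis) arrays, one per feature modality
    signals      : list of (n_dim,) test-sample features, one per modality
    Returns the indices of the shared bases (the joint sparsity pattern).
    """
    residuals = [y.copy() for y in signals]
    support = []
    for _ in range(n_atoms):
        # Score each basis by its total correlation with every task's residual,
        # so the selected atom must help all modalities at once.
        scores = sum(np.abs(D.T @ r) for D, r in zip(dictionaries, residuals))
        scores[support] = -np.inf  # never reselect an atom
        support.append(int(np.argmax(scores)))
        # Re-fit each task on the shared support and update its residual.
        for t, (D, y) in enumerate(zip(dictionaries, signals)):
            Ds = D[:, support]
            coef, *_ = np.linalg.lstsq(Ds, y, rcond=None)
            residuals[t] = y - Ds @ coef
    return support

# Toy usage: two feature modalities built from the same three bases.
rng = np.random.default_rng(0)
n_dim, n_basis = 20, 50
Ds = [rng.standard_normal((n_dim, n_basis)) for _ in range(2)]
true_support = [4, 17, 33]
ys = [D[:, true_support] @ rng.standard_normal(3) for D in Ds]
shared = somp(Ds, ys, n_atoms=3)
print(sorted(shared))
```

Solving each task independently (plain OMP per modality) could pick different atoms per task; forcing one support is what recovers the shared sparsity pattern the quoted passage refers to.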
“…To model complex task dependencies, several clustered multi-task learning methods have been introduced [31]- [33]. In computer vision, MTL has previously been proposed in the context of vision-based activity recognition from fixed cameras and in a supervised setting [12], [13], [34]. In this paper, we consider the more challenging FPV scenario where no annotated data are provided.…”
Section: B Supervised Multi-task Learning
confidence: 99%
“…Application scenarios include content-based video retrieval, intelligent video surveillance, and human-computer interaction. Although this problem has been studied for a long time, recognizing human actions in videos remains challenging, not only because of geometric variations between intra-class objects or actions, but also because of changes in scale, rotation, viewpoint, illumination, and occlusion [1].…”
Section: Introduction
confidence: 99%