2015
DOI: 10.1007/978-3-319-16817-3_34
Recognizing Daily Activities from First-Person Videos with Multi-task Clustering

Cited by 11 publications (11 citation statements). References 22 publications.
“…Multi-task learning [40][41][42][43][44] uses multiple types of data from multiple views to learn multiple related tasks, improving the accuracy of classification and regression models. Most of these approaches use multiple types of features of the same modality for daily activity detection.…”
Section: Multimedia Data Representation (mentioning)
confidence: 99%
“…Multi-task learning has received considerable attention from the vision community, and has been successfully applied to many problems such as image classification [36], visual tracking [37], daily activity recognition from first-person videos [38], image-based indoor localization [39] and head pose classification under motion [40]- [42]. An MTL approach to monocular action recognition is proposed in [43], where the authors exploit relatedness of action categories to learn latent tasks (motion patterns) shared across actions.…”
Section: Multi-task Learning (mentioning)
confidence: 99%
“…In this work, low-level features such as the line alignment feature are employed; associated with a geodesic flow kernel (GFK), they can predict the snap point confidence. A multi-task clustering approach is proposed in [42,43,44] for daily activity recognition in egocentric videos.…”
(mentioning)
confidence: 99%
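The statements above describe multi-task clustering for egocentric daily activity recognition only at a high level. The Python sketch below illustrates one generic way related tasks (e.g. videos from different wearers) can share cluster structure during clustering; the function name, the centroid-sharing regularization, and all parameters are illustrative assumptions, not the method of the cited paper.

```python
# Minimal multi-task clustering sketch (illustrative only, NOT the cited paper's method):
# each task is clustered with k-means, but per-task centroids are pulled toward a set of
# centroids shared across tasks, so related tasks inform each other's cluster structure.
import numpy as np

def multi_task_kmeans(tasks, k, lam=0.5, iters=20, seed=0):
    """tasks: list of (n_t, d) feature arrays; k: clusters per task; lam: sharing strength."""
    rng = np.random.default_rng(seed)
    pooled = np.vstack(tasks)
    shared = pooled[rng.choice(len(pooled), k, replace=False)]  # shared centroids
    centroids = [shared.copy() for _ in tasks]                   # per-task centroids

    for _ in range(iters):
        labels = []
        for t, X in enumerate(tasks):
            # assign each sample to its nearest per-task centroid
            dists = ((X[:, None, :] - centroids[t][None, :, :]) ** 2).sum(-1)
            labels.append(dists.argmin(1))
        for t, X in enumerate(tasks):
            for c in range(k):
                pts = X[labels[t] == c]
                mean = pts.mean(0) if len(pts) else centroids[t][c]
                # shrink the per-task centroid toward the shared one
                centroids[t][c] = (mean + lam * shared[c]) / (1.0 + lam)
        # shared centroids are the average of the per-task centroids
        shared = np.mean(np.stack(centroids), axis=0)
    return centroids, labels

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # two toy "tasks" drawn around the same two underlying activity clusters
    tasks = [np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(4, 1, (30, 5))])
             for _ in range(2)]
    cents, labs = multi_task_kmeans(tasks, k=2)
    print([np.bincount(l) for l in labs])
```

The coupling strength `lam` trades off per-task fit against cross-task agreement; `lam = 0` reduces the sketch to independent k-means per task.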