2017
DOI: 10.1016/j.cviu.2017.05.005
Automatic action annotation in weakly labeled videos

Abstract: Manual spatio-temporal annotation of human action in videos is laborious, requires several annotators and contains human biases. In this paper, we present a weakly supervised approach to automatically obtain spatio-temporal annotations of an actor in action videos. We first obtain a large number of action proposals in each video. To capture a few most representative action proposals in each video and evade processing thousands of them, we rank them using optical flow and saliency in a 3D-MRF based framework an…
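The abstract describes ranking action proposals with optical-flow and saliency cues inside a 3D-MRF framework. Below is a minimal, illustrative sketch (not the authors' code) of how a per-proposal score combining these two cues could be computed for one frame pair, assuming OpenCV with the contrib saliency module and proposals given as (x, y, w, h) boxes; the function name proposal_scores and the product-of-means scoring rule are assumptions, and the 3D-MRF ranking over the whole video is not shown.

```python
# Illustrative sketch only: score candidate action-proposal boxes in a frame
# by combining dense optical-flow magnitude with a static saliency map.
# Requires opencv-contrib-python for the cv2.saliency module.
import cv2
import numpy as np

def proposal_scores(prev_gray, curr_gray, boxes):
    """Score each (x, y, w, h) box by mean flow magnitude * mean saliency."""
    # Dense optical flow between consecutive grayscale frames (Farneback).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    flow_mag = np.linalg.norm(flow, axis=2)

    # Static saliency of the current frame (spectral-residual model).
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = saliency.computeSaliency(curr_gray)
    sal_map = sal_map.astype(np.float32) if ok else np.ones_like(flow_mag)

    scores = []
    for (x, y, w, h) in boxes:
        roi_flow = flow_mag[y:y + h, x:x + w]
        roi_sal = sal_map[y:y + h, x:x + w]
        # Proposals covering fast-moving, salient regions score higher.
        scores.append(float(roi_flow.mean() * roi_sal.mean()))
    return scores
```

In a pipeline like the one the abstract outlines, per-frame scores of this kind could serve as unary potentials, with the 3D-MRF adding pairwise terms that favor temporally consistent proposal selections across frames.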

Cited by 8 publications (1 citation statement)
References 45 publications (85 reference statements)
“…Recognizing human actions is a fundamental task in many areas and applications, such as surveillance and crowd control [1], automatic annotation of human actions in videos [2], video indexing [3], analysis of sports videos [4], HCI applications [5] and gesture-based interaction in video games [6], among other examples [7]. Nevertheless, such applications rarely offer ideal environmental conditions, which makes action characterization difficult.…”
Section: Introduction (mentioning, confidence: 99%)