2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52729.2023.01809
Hybrid Active Learning via Deep Clustering for Video Action Detection

Aayush J. Rana,
Yogesh S. Rawat
Cited by 10 publications (3 citation statements) | References 39 publications
“…Implementation details: We use PyTorch to build our models and train them on a single 16GB GPU. For action detection, we use VideoCapsuleNet (Duarte, Rawat, and Shah 2018; Kumar and Rawat 2022; Rana and Rawat 2022), with margin loss for classification and BCE loss for detection. The network input is 8 RGB frames of size 224 × 224.…”
Section: Methods
confidence: 99%
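To make the quoted setup concrete, here is a minimal PyTorch sketch of the two losses it names. VideoCapsuleNet itself is not reproduced; the shapes, class count, and margin hyperparameters below follow Sabour et al.'s margin-loss formulation and are illustrative assumptions, not values confirmed by the citing paper.

```python
# Sketch of the described loss setup: capsule-style margin loss for
# classification plus BCE for spatio-temporal detection. Shapes, class
# count, and hyperparameters are assumptions, not values from the paper.
import torch
import torch.nn.functional as F

def margin_loss(class_lengths, labels, m_pos=0.9, m_neg=0.1, lam=0.5):
    """class_lengths: (B, C) capsule activations in [0, 1]; labels: (B,)."""
    t = F.one_hot(labels, num_classes=class_lengths.size(1)).float()
    pos = t * F.relu(m_pos - class_lengths).pow(2)
    neg = lam * (1.0 - t) * F.relu(class_lengths - m_neg).pow(2)
    return (pos + neg).sum(dim=1).mean()

# Hypothetical clip batch: 8 RGB frames of size 224 x 224, as stated above.
B, C, T = 2, 24, 8
class_lengths = torch.rand(B, C)              # stand-in classification head output
det_logits = torch.randn(B, 1, T, 224, 224)   # stand-in localization head output
labels = torch.randint(0, C, (B,))
masks = torch.randint(0, 2, det_logits.shape).float()  # ground-truth action masks

loss = margin_loss(class_lengths, labels) \
     + F.binary_cross_entropy_with_logits(det_logits, masks)
```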
“…Active learning (Pardo et al. 2021) enables selecting samples for annotation by estimating the usefulness of each sample to the underlying task. It has been used to iteratively select a subset of data for annotation across tasks such as image classification (Wang et al. 2016), image object detection (Aghdam et al. 2019; Pardo et al. 2021), and video temporal localization (Heilbron et al. 2018), with only a few studies on video action detection (Rana and Rawat 2022). Sample selection in AL is done using uncertainty (Liu et al. 2019), entropy (Aghdam et al. 2019), core-set selection (Sener and Savarese 2017), or mutual information (Kirsch, Van Amersfoort, and Gal 2019).…”
Section: Related Work
confidence: 99%
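As an illustration of the uncertainty- and entropy-based criteria listed above, here is a minimal sketch of entropy-based sample selection; the prediction pool, class count, and annotation budget are hypothetical placeholders, not taken from any of the cited works.

```python
# Entropy-based active-learning selection: annotate the samples whose
# predictive distribution is most uncertain. Pool size, class count,
# and budget are hypothetical placeholders.
import torch

def entropy_scores(probs, eps=1e-8):
    """Predictive entropy per sample; probs has shape (N, C)."""
    return -(probs * (probs + eps).log()).sum(dim=1)

probs = torch.softmax(torch.randn(100, 24), dim=1)  # stand-in model predictions
budget = 10
selected = entropy_scores(probs).topk(budget).indices  # most uncertain samples
```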