2022
DOI: 10.1007/978-3-031-18913-5_48
Multimodal Violent Video Recognition Based on Mutual Distillation

Cited by 3 publications (2 citation statements) | References 32 publications
“…Shang et al. [83] divided the related work into deep learning models, supervised and unsupervised learning, knowledge transfer (knowledge distillation), and multimodal learning. The authors first proposed to transfer information from large datasets to small violence datasets via mutual distillation, using a pre-trained self-supervised model for vital RGB features.…”
Section: Selected Article Description (mentioning, confidence: 99%)
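The mutual-distillation idea summarized in this citation statement can be illustrated with a short sketch. The following is a minimal, hypothetical example of mutual distillation between two classifiers in the style of deep mutual learning, not the authors' published implementation; the function name, temperature T, and weight alpha are illustrative assumptions.

import torch
import torch.nn.functional as F

def mutual_distillation_losses(logits_a, logits_b, labels, T=2.0, alpha=0.5):
    """Per-network losses for mutual distillation (illustrative sketch).

    Each network minimizes cross-entropy on the hard labels plus a KL
    term that pulls its softened predictions toward its peer's.
    """
    ce_a = F.cross_entropy(logits_a, labels)
    ce_b = F.cross_entropy(logits_b, labels)

    # Temperature-softened distributions; detach the peer so each KL
    # term only updates the "student" side of that pairing.
    log_p_a = F.log_softmax(logits_a / T, dim=1)
    log_p_b = F.log_softmax(logits_b / T, dim=1)
    p_a = F.softmax(logits_a / T, dim=1).detach()
    p_b = F.softmax(logits_b / T, dim=1).detach()

    kl_a = F.kl_div(log_p_a, p_b, reduction="batchmean") * (T * T)
    kl_b = F.kl_div(log_p_b, p_a, reduction="batchmean") * (T * T)

    loss_a = (1 - alpha) * ce_a + alpha * kl_a
    loss_b = (1 - alpha) * ce_b + alpha * kl_b
    return loss_a, loss_b

In this sketch both networks are optimized simultaneously, each learning from the other's softened predictions; one branch could be initialized from a pre-trained self-supervised RGB backbone, in the spirit of the transfer the citation describes.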
“…The dataset developed by Rachna et al. [105] contains YouTube videos, stock films, and self-recorded violent videos. The Violent Clip Dataset (VCD) [83] contains 37 Hollywood movies and 30 YouTube video clips. The dataset created by Hung et al. [42] includes volunteers acting out different scenarios, such as elderly patients with lower limb disabilities.…”
Section: Description of the Analysis of the Selected Items (mentioning, confidence: 99%)