2018
DOI: 10.1007/978-3-030-01264-9_11
Graph Distillation for Action Detection with Privileged Modalities

Cited by 103 publications (75 citation statements)
References 45 publications
“…Luo et al. [10] addressed a problem similar to ours, in which the model is first trained on several modalities (RGB, depth, joints, and infrared) but tested on only one. A graph-based distillation method is proposed that distills information from all modalities at training time, while also passing through a validation phase on a subset of modalities.…”
Section: Related Work
confidence: 99%
“…The performance values for the Hoffman et al. method [10] (row #20 of Table 4) are taken from the implementation and experiments in [11]. Row #21 refers to the method by Luo and colleagues [21], which uses six modalities at training time (RGB, depth, optical flow, and three different encodings of skeleton data) and RGB only at test time.…”
Section: Action Recognition Performance and Comparisons
confidence: 99%
“…At test time, it used a softmax to choose the final prediction between the prediction from the hallucination representation and the prediction from the RGB representation. Luo et al. [15] recently proposed graph distillation for action detection with privileged modalities (RGB, depth, skeleton, and flow), where a novel graph distillation layer dynamically learns to distill knowledge from the most effective modality, depending on the type of action. In our case, we use paired depth and RGB images during training.…”
Section: Related Work
confidence: 99%
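
The statements above describe the core mechanism at a high level: several privileged modalities are available during training, learned graph edges weight how much each one contributes to the distillation target, and only one modality (typically RGB) is used at test time. Below is a minimal sketch of that idea, not Luo et al.'s actual implementation; the module name, the pairwise edge scorer, and the MSE imitation loss are assumptions made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphDistillationSketch(nn.Module):
    """Toy sketch of a graph distillation layer: each modality is a node,
    and learned edge weights decide how much each privileged modality
    contributes to the distillation target of the test-time modality."""

    def __init__(self, feat_dim: int):
        super().__init__()
        # One scalar edge score per (privileged modality -> target modality)
        # pair, predicted from the concatenated node features (hypothetical
        # design choice for this sketch).
        self.edge_scorer = nn.Linear(2 * feat_dim, 1)

    def forward(self, feats: torch.Tensor, target_idx: int) -> torch.Tensor:
        # feats: (num_modalities, batch, feat_dim) per-modality features.
        num_mod, _, _ = feats.shape
        target = feats[target_idx]                        # (batch, dim)
        scores = []
        for m in range(num_mod):
            if m == target_idx:
                continue
            pair = torch.cat([feats[m], target], dim=-1)  # (batch, 2*dim)
            scores.append(self.edge_scorer(pair))         # (batch, 1)
        # Softmax over incoming edges: privileged modalities compete, so
        # the most informative one gets the largest distillation weight.
        w = F.softmax(torch.stack(scores, dim=0), dim=0)  # (M-1, batch, 1)
        others = torch.stack(
            [feats[m] for m in range(num_mod) if m != target_idx], dim=0)
        distill_target = (w * others).sum(dim=0)          # (batch, dim)
        # Imitation loss: pull the test-time modality toward the weighted
        # combination of privileged-modality features; detach so gradients
        # do not flow back into the teacher streams.
        return F.mse_loss(target, distill_target.detach())

# Hypothetical usage: 4 modality streams (e.g. RGB, depth, skeleton, flow),
# batch of 8, 256-dim features; index 0 is the test-time RGB stream.
feats = torch.randn(4, 8, 256)
layer = GraphDistillationSketch(256)
loss = layer(feats, target_idx=0)

Taking the softmax over incoming edges means the distillation weight for a given example can concentrate on whichever privileged stream is most informative for that action, which matches the "most effective modality" behavior the third statement describes.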