2020
DOI: 10.1109/TPAMI.2019.2929038

Learning with Privileged Information via Adversarial Discriminative Modality Distillation

Abstract: Heterogeneous data modalities can provide complementary cues for several tasks, usually leading to more robust algorithms and better performance. However, while training data can be accurately collected to include a variety of sensory modalities, it is often the case that not all of them are available in real-life (testing) scenarios, where a model has to be deployed. This raises the challenge of how to extract information from multimodal data in the training stage, in a form that can be exploited at test time…
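
For orientation, here is a minimal, hedged sketch of the general modality-distillation idea the abstract describes: a "student" network that sees only the test-time modality (RGB) is trained to mimic the features of a "teacher" pretrained on the privileged modality (depth, in this example). All names are illustrative, and the simple L2 feature-mimicking loss stands in for the paper's adversarial discriminative objective; it is a sketch of the concept, not the authors' implementation.

import torch
import torch.nn as nn

def small_cnn(in_ch):
    # Tiny illustrative backbone: conv -> ReLU -> global pool -> 32-d feature.
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

teacher = small_cnn(1)     # assumed pretrained on depth (privileged); kept frozen
student = small_cnn(3)     # sees only RGB, the modality available at deployment
classifier = nn.Linear(32, 10)

opt = torch.optim.Adam(list(student.parameters()) + list(classifier.parameters()), lr=1e-3)
ce, mimic = nn.CrossEntropyLoss(), nn.MSELoss()

rgb = torch.randn(8, 3, 64, 64)       # dummy training batch
depth = torch.randn(8, 1, 64, 64)     # privileged modality: training only
labels = torch.randint(0, 10, (8,))

opt.zero_grad()
with torch.no_grad():
    priv_feat = teacher(depth)        # privileged features the student must imitate
feat = student(rgb)
loss = ce(classifier(feat), labels) + mimic(feat, priv_feat)
loss.backward()
opt.step()

logits = classifier(student(rgb))     # deployment: the RGB stream alone suffices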

Cited by 54 publications (24 citation statements) · References 53 publications
Citation types: 0 supporting, 24 mentioning, 0 contrasting

Citation statements (ordered by relevance):
“…In every case, Input Dropout performs better while being simpler than the other approaches. Note that we have not compared our approach to "modality distillation" [8] here since the method cannot be applied to this scenario. Indeed, it would involve training a network to dehaze a depth (or segmentation) image, which would require hallucinating scene contents.…”
Section: Input Dropout for Image Dehazing (mentioning)
confidence: 99%
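
As a point of comparison with the distillation sketch above, the "Input Dropout" idea quoted here is far simpler: the auxiliary modality is concatenated as extra input channels and randomly zeroed during training, so the same network can run on RGB alone at test time. A hedged sketch, with illustrative names and drop probability:

import torch

def input_dropout(rgb, depth, p=0.5, training=True):
    # Concatenate RGB (B,3,H,W) with depth (B,1,H,W); during training the
    # depth channels are zeroed with probability p, so the network learns to
    # cope when that modality is absent.
    if training and torch.rand(1).item() < p:
        depth = torch.zeros_like(depth)
    return torch.cat([rgb, depth], dim=1)   # (B,4,H,W) input for a single network

x_train = input_dropout(torch.randn(8, 3, 64, 64), torch.randn(8, 1, 64, 64))
x_test = input_dropout(torch.randn(8, 3, 64, 64),
                       torch.zeros(8, 1, 64, 64), training=False)  # depth missing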
“…We evaluate the use of Input Dropout for image classification using RGB+D training data. For this, we rely on the methodology proposed by Garcia et al. [8], who use the crops of individual objects from the NYU V2 dataset [21] adapted by [13] for object classification using RGB+D. We used the same split as in [8]: 4,600 RGB-D images in total, where around 50% are used for training and the remainder for testing.…”
Section: Input Dropout for Classification (mentioning)
confidence: 99%
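
The evaluation protocol quoted above (about 4,600 NYU V2 object crops, roughly half for training and half for testing) could be reproduced with a split along these lines; the file paths and the seeded random shuffle are hypothetical placeholders for illustration, not the fixed split the cited papers actually use:

import random

crops = [f"nyu_v2_crops/{i:05d}.png" for i in range(4600)]  # hypothetical paths
random.seed(0)                      # fixed seed so the split is reproducible
random.shuffle(crops)
half = len(crops) // 2
train_files, test_files = crops[:half], crops[half:]
print(len(train_files), len(test_files))  # 2300 2300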