2023
DOI: 10.1016/j.neunet.2023.03.003

COM: Contrastive Masked-attention model for incomplete multimodal learning

Cited by 7 publications (2 citation statements)
References 55 publications
“…Researchers can explore different fusion strategies, such as early fusion (combining raw data), late fusion (combining model outputs), or hybrid fusion (combining intermediate representations), based on the nature of the data and the task at hand (Figure 5C). 84 Thirdly, in some cases certain modalities may have incomplete or missing data, i.e., modality missingness, posing difficulties for data fusion and for online use of models. 85 Solutions to address modality missingness include data interpolation/imputation, information transfer, leveraging knowledge and priors, and integrating multimodal features.…”
Section: Multimodal Data Fusion Algorithms
confidence: 99%
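The early/late/hybrid taxonomy in this statement maps directly onto model structure. The following is a minimal PyTorch sketch of the three strategies; the two-modality setup, module names, and dimensions are illustrative assumptions, not code from the cited papers.

```python
# Minimal sketch of early, late, and hybrid fusion for two modalities.
# All names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    """Concatenate raw inputs, then encode them jointly."""
    def __init__(self, dim_a, dim_b, hidden, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_a + dim_b, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x_a, x_b):
        return self.net(torch.cat([x_a, x_b], dim=-1))

class LateFusion(nn.Module):
    """Run a separate model per modality, then combine the outputs."""
    def __init__(self, dim_a, dim_b, hidden, n_classes):
        super().__init__()
        self.head_a = nn.Sequential(nn.Linear(dim_a, hidden), nn.ReLU(),
                                    nn.Linear(hidden, n_classes))
        self.head_b = nn.Sequential(nn.Linear(dim_b, hidden), nn.ReLU(),
                                    nn.Linear(hidden, n_classes))

    def forward(self, x_a, x_b):
        # Averaging per-modality logits is one simple combination rule.
        return 0.5 * (self.head_a(x_a) + self.head_b(x_b))

class HybridFusion(nn.Module):
    """Encode each modality, then fuse the intermediate representations."""
    def __init__(self, dim_a, dim_b, hidden, n_classes):
        super().__init__()
        self.enc_a = nn.Sequential(nn.Linear(dim_a, hidden), nn.ReLU())
        self.enc_b = nn.Sequential(nn.Linear(dim_b, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x_a, x_b):
        z = torch.cat([self.enc_a(x_a), self.enc_b(x_b)], dim=-1)
        return self.head(z)

# Usage: all three variants map the same inputs to class logits.
x_a, x_b = torch.randn(4, 16), torch.randn(4, 24)
logits = HybridFusion(16, 24, hidden=32, n_classes=5)(x_a, x_b)
```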
“… 48 However, alongside the growth in popularity of attention mechanisms within architectures such as Transformers, there has also been exploration of how attention, with its built-in support for masking certain inputs, can be used to handle missing data. 49, 50, 51 Our framework leverages a similar approach via a cross-attention mechanism that allows us to entirely bypass data imputation and the need for complete data in model training.…”
Section: Introduction
confidence: 99%
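The masking idea referenced here can be made concrete: a learned query cross-attends over per-modality embeddings, and a boolean key-padding mask excludes the missing ones, so no imputation or complete-data requirement is needed. The token layout, mask convention, and all names below are illustrative assumptions, not the implementation from COM or the citing framework.

```python
# Sketch of masked cross-attention fusion over per-modality tokens.
# Missing modalities are skipped via the attention mask rather than imputed.
import torch
import torch.nn as nn

class MaskedCrossAttentionFusion(nn.Module):
    """A learned query attends only to the modality tokens that are observed."""
    def __init__(self, dim, n_heads=4):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, dim))
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, tokens, present):
        # tokens:  (batch, n_modalities, dim), one embedding per modality;
        #          missing slots may hold any placeholder values (e.g. zeros).
        # present: (batch, n_modalities) boolean, True where observed.
        q = self.query.expand(tokens.size(0), -1, -1)
        # key_padding_mask marks positions to IGNORE, so invert `present`.
        fused, _ = self.attn(q, tokens, tokens, key_padding_mask=~present)
        return fused.squeeze(1)  # (batch, dim) fused representation

# Usage: the second sample is missing its middle modality.
fusion = MaskedCrossAttentionFusion(dim=32)
tokens = torch.randn(2, 3, 32)
present = torch.tensor([[True, True, True],
                        [True, False, True]])
out = fusion(tokens, present)  # trains without imputing missing data
```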