2020
DOI: 10.48550/arxiv.2007.02038
Preprint

Low Rank Fusion based Transformers for Multimodal Sequences

Cited by 9 publications (19 citation statements)
References 0 publications
“…The experimental results show that the proposed method achieves a comparable performance level to PMR (Lv et al. 2021) on different metrics for the three datasets. Compared with LMF-MulT (Sahay et al. 2020), which uses six transformer encoders, we achieve better performance on the different datasets using half as many transformer encoders.…”
Section: Comparison With the State-of-the-Arts (mentioning)
confidence: 99%
“…Tsai et al. (2019a) proposed a multimodal transformer (MulT) to learn intermodal correlations using a cross-modal attention mechanism. Sahay et al. (2020) proposed low rank fusion based transformers (LMF-MulT), building on previous work by designing LMF units for efficient fusion of modality features. Lv et al. (2021) proposed the progressive modality reinforcement (PMR) method.…”
Section: Related Work (mentioning)
confidence: 99%
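
For readers unfamiliar with the LMF units mentioned in the statement above, the sketch below illustrates the general low-rank multimodal fusion idea: the full tensor-product fusion of several modality vectors is approximated with rank-r factors, which is what makes the fusion efficient. The dimensions, rank, and variable names are illustrative assumptions, not the exact configuration used in LMF-MulT.

```python
import torch
import torch.nn as nn


class LowRankFusion(nn.Module):
    """Sketch of a low-rank multimodal fusion (LMF-style) unit.

    Each modality vector is projected with rank-r factors, the projections
    are multiplied element-wise across modalities, and the r results are
    combined with learned weights instead of forming the full outer-product
    fusion tensor.
    """

    def __init__(self, dims, rank, out_dim):
        super().__init__()
        # One factor tensor per modality; inputs are augmented with a
        # constant 1 so lower-order (unimodal, bimodal) terms survive
        # the element-wise product.
        self.factors = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(rank, d + 1, out_dim)) for d in dims]
        )
        self.rank_weights = nn.Parameter(torch.ones(rank))
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, inputs):
        # inputs: one tensor per modality, each of shape (batch, dim_m)
        fused = None
        for x, w in zip(inputs, self.factors):
            ones = torch.ones(x.size(0), 1, device=x.device, dtype=x.dtype)
            x = torch.cat([x, ones], dim=-1)            # (batch, dim_m + 1)
            proj = torch.einsum("bd,rdo->bro", x, w)    # (batch, rank, out_dim)
            fused = proj if fused is None else fused * proj
        # Collapse the rank dimension with learned weights.
        return torch.einsum("r,bro->bo", self.rank_weights, fused) + self.bias


# Illustrative usage with assumed text/audio/vision feature sizes.
lmf = LowRankFusion(dims=[300, 74, 35], rank=4, out_dim=40)
features = [torch.randn(8, d) for d in (300, 74, 35)]
print(lmf(features).shape)  # torch.Size([8, 40])
```

In LMF-MulT the output of such fusion units feeds the transformer encoders referenced in the comparison above; the unit is shown here in isolation for clarity.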