Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2022.emnlp-main.35
RetroMAE: Pre-Training Retrieval-oriented Language Models Via Masked Auto-Encoder

Cited by 25 publications (18 citation statements). References: 0 publications.
“…We call this VideoDex-BC-Open. We ablate the type of visual representation and prior use by trying an initialization using the VGG16 network (Simonyan and Zisserman 2014) (VideoDex-VGG) and the MVP network (He et al, 2022; Xiao et al, 2022) (VideoDex-MVP) based representation trained for robot learning. We ablate the need for a two-stream policy, instead training a single NDP for both hand and wrist.…”
Section: Results
confidence: 99%
“…We compared using our approach with MVP (VideoDex-MVP) (Xiao et al, 2022) and VGG (VideoDex-VGG) (Simonyan and Zisserman 2014) and their performance was below VideoDex using Nair et al (2022). This is likely because both encoders are much larger than the ResNet18 (He et al, 2015) we use and require a lot more training time than feasible on human videos.…”
Section: Results
confidence: 99%
“…However, a strong decoder may negatively impact sequence representation quality (Lu et al 2021). Recent approaches such as SimLM (Wang et al 2023) and RetroMAE (Xiao et al 2022) address this issue by adopting shallow decoders with limited past context access and enhanced decoding mechanisms.…”
Section: Related Work
confidence: 99%
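The last excerpt describes the design the citing papers attribute to RetroMAE: a full-strength encoder compresses a lightly masked input into a single sentence embedding, while a deliberately shallow decoder must reconstruct a much more aggressively masked copy of the input from little beyond that embedding, so reconstruction pressure lands on the embedding rather than on decoder capacity. The toy NumPy sketch below illustrates only that asymmetry; it is not the paper's implementation, and the mean-pooled "encoder", the single-attention-layer "decoder", and the masking ratios are all illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_tokens(tokens, ratio, mask_id=0):
    """Replace a random fraction of token ids with mask_id."""
    tokens = tokens.copy()
    n_mask = max(1, int(len(tokens) * ratio))
    idx = rng.choice(len(tokens), size=n_mask, replace=False)
    tokens[idx] = mask_id
    return tokens, idx

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def one_layer_decoder(sent_emb, tok_embs):
    """Single attention layer: each position attends over the sentence
    embedding plus the (masked) token embeddings -- a toy stand-in for
    a shallow decoder with limited context access."""
    ctx = np.vstack([sent_emb[None, :], tok_embs])  # prepend the sentence vector
    scores = tok_embs @ ctx.T / np.sqrt(tok_embs.shape[1])
    return softmax(scores) @ ctx

# Toy embedding table and a sequence of 12 token ids (id 0 reserved for [MASK]).
d, vocab, seq_len = 8, 20, 12
emb_table = rng.normal(size=(vocab, d))
tokens = rng.integers(1, vocab, size=seq_len)

# Asymmetric masking: light on the encoder side, aggressive on the decoder side.
enc_tokens, _ = mask_tokens(tokens, ratio=0.15)
dec_tokens, dec_masked = mask_tokens(tokens, ratio=0.5)

# "Encoder": mean-pool embeddings of the lightly masked input into one vector.
sent_emb = emb_table[enc_tokens].mean(axis=0)

# The shallow decoder must recover the heavily masked positions mostly
# from the sentence embedding, which is what trains it to be informative.
decoded = one_layer_decoder(sent_emb, emb_table[dec_tokens])
print(decoded.shape)  # → (12, 8)
```

In a real training loop the decoded vectors would be projected to vocabulary logits and a masked-language-modeling loss applied at the masked positions; the point of the sketch is only that the decoder side sees far fewer intact tokens than the encoder side.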