2022
DOI: 10.48550/arxiv.2212.07207
Preprint

MAELi -- Masked Autoencoder for Large-Scale LiDAR Point Clouds

Cited by 1 publication (1 citation statement)
References 0 publications

“…This self-supervised training strategy does not depend on data-specific augmentations and is agnostic to the data modality. Starting from its success for natural language models such as BERT [7], it has made its way into vision [9,12,36], 3D data processing [14,16,38], and many other domains [4,10,13,21]. MAE has also been successfully employed in multimodal learning [4,5,11]; yet, again, these approaches rely on different encoders for each data modality.…”
Section: Related Work
Confidence: 99%
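The statement above summarizes masked-autoencoder (MAE) pretraining: mask most of the input, encode only the visible part, and reconstruct the masked part, with no modality-specific augmentations. As a rough illustration only (this is not MAELi's or any cited paper's implementation; the toy data and the stand-in "model" below are invented for this sketch), the core masking step and masked-only loss might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def mae_masking(patches, mask_ratio=0.75):
    # Randomly split patch indices into a small visible set and a large
    # masked set; the encoder only ever sees the visible patches.
    n = patches.shape[0]
    n_visible = max(1, int(round(n * (1 - mask_ratio))))
    perm = rng.permutation(n)
    return perm[:n_visible], perm[n_visible:]

def masked_reconstruction_loss(patches, prediction, masked_idx):
    # The self-supervised loss is computed on the masked patches only.
    diff = prediction[masked_idx] - patches[masked_idx]
    return float(np.mean(diff ** 2))

# Toy data: 64 "patches" (image tiles, point-cloud voxels, ...), 32-dim each.
patches = rng.normal(size=(64, 32))
visible_idx, masked_idx = mae_masking(patches)

# Stand-in for the encoder + decoder: predict every patch as the mean of
# the visible patches. A real MAE learns this mapping end to end.
prediction = np.tile(patches[visible_idx].mean(axis=0), (patches.shape[0], 1))
loss = masked_reconstruction_loss(patches, prediction, masked_idx)
print(f"loss on masked patches: {loss:.4f}")
```

The key property the quote highlights is visible here: the objective needs only the raw input and a random mask, so the same recipe transfers across modalities (text tokens, image patches, or LiDAR voxels) without hand-designed augmentations.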