2022
DOI: 10.5194/essd-2022-155
Preprint

MDAS: A New Multimodal Benchmark Dataset for Remote Sensing

Abstract: In Earth observation, multimodal data fusion is an intuitive strategy for breaking the limitations of individual data sources. The complementary physical content of different data sources allows comprehensive and precise information retrieval. With current satellite missions, such as the ESA Copernicus programme, various data are accessible at an affordable cost, so future applications will have many options for data sources. Such a privilege can be beneficial only if algorithms are ready to work with various data sources. However, c…


Cited by 1 publication (2 citation statements)
References 35 publications
“…The data are accessible at https://doi.org/10.14459/2022mp1657312 with a CC BY-SA 4.0 license (Hu et al., 2022a), and the code (including the pre-trained models) is at https://doi.org/10.5281/zenodo.7428215 (Hu et al., 2022b). Also, the live repository is available at https://github.…”
Section: Code and Data Availability (mentioning)
confidence: 99%
“…Our experiments demonstrate the performance of representative state-of-the-art algorithms whose outcomes can serve as baselines for further studies. The dataset is publicly available at https://doi.org/10.14459/2022mp1657312 (Hu et al., 2022a) and the code (including the pre-trained models) at https://doi.org/10.5281/zenodo.7428215 (Hu et al., 2022b).…”
(mentioning)
confidence: 99%