2022
DOI: 10.3390/app122412942

GeoBERT: Pre-Training Geospatial Representation Learning on Point-of-Interest

Abstract: Thanks to the development of geographic information technology, geospatial representation learning based on POIs (Points-of-Interest) has gained widespread attention in the past few years. POIs are important indicators of urban socioeconomic activity and are widely used to extract geospatial information. However, previous studies often focus on a specific area, such as a city or a district, and are designed only for particular tasks, such as land-use classification. On the other hand, large-scale pre-trained…
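To illustrate the general idea behind POI-based pre-training mentioned in the abstract, here is a minimal sketch: a spatially ordered sequence of POI category tokens is treated as a "sentence" and masked for a BERT-style masked-language-model objective. This is an assumed simplification for illustration, not the pipeline described in the GeoBERT paper; the vocabulary and helper function are hypothetical.

```python
# Assumed simplification of POI-sequence pre-training (not GeoBERT's
# actual pipeline): mask POI category tokens the way BERT masks words.
import random

POI_VOCAB = ["[PAD]", "[MASK]", "restaurant", "school", "bank",
             "hospital", "park", "hotel", "subway_station"]
TOKEN_ID = {tok: i for i, tok in enumerate(POI_VOCAB)}

def mask_poi_sequence(categories, mask_prob=0.15):
    """Randomly mask POI category tokens, returning (inputs, labels).

    labels is -100 (the conventional 'ignore' index) everywhere except
    masked positions, where it holds the original token id to predict.
    """
    inputs, labels = [], []
    for cat in categories:
        tid = TOKEN_ID[cat]
        if random.random() < mask_prob:
            inputs.append(TOKEN_ID["[MASK]"])
            labels.append(tid)
        else:
            inputs.append(tid)
            labels.append(-100)
    return inputs, labels

# POIs near one location, ordered e.g. by distance from a grid center.
seq = ["subway_station", "restaurant", "bank", "hotel", "park"]
print(mask_poi_sequence(seq))
```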

Cited by 7 publications (2 citation statements)
References: 30 publications
“…DAMO Academy and Gaode collaboratively launched the MGeo model. This model is a multitask, multimodal geographic text pretraining base model, which enhances performance across various downstream geographic text processing tasks [49]. In remote sensing, the Beijing Institute of Technology research team proposed the pioneering MLLM EarthGPT model, which unifies and integrates various sensor remote sensing interpretation tasks [50].…”
Section: Related Work
confidence: 99%
“…For the NSP (next-sentence prediction) task, the BERT model is pre-trained on representations of pairs of texts to predict whether one sequence follows the other. BERT has also been pre-trained for other areas of knowledge, such as vision [13,14], bioinformatics and computational biology [15][16][17], or geospatial representation learning based on points of interest [18].…”
Section: Introduction
confidence: 99%
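To make the NSP objective in the statement above concrete, here is a minimal sketch using the Hugging Face transformers library; the checkpoint name and the example sentence pair are illustrative assumptions, not taken from the cited papers.

```python
# Minimal NSP sketch (assumed setup: Hugging Face transformers + PyTorch).
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")

# BERT encodes the pair as one sequence: [CLS] A [SEP] B [SEP],
# and predicts whether B actually follows A in the source text.
sentence_a = "The museum sits next to the central train station."
sentence_b = "Visitors often combine both in a single trip."
inputs = tokenizer(sentence_a, sentence_b, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 2)

# Label 0 = "B is the true next sequence", label 1 = "B is random".
probs = torch.softmax(logits, dim=-1)
print(f"P(B follows A) = {probs[0, 0]:.3f}")
```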