2017
DOI: 10.1016/j.isprsjprs.2017.09.007
Hierarchical semantic cognition for urban functional zones with VHR satellite images and POI data

Cited by 221 publications (137 citation statements)
References 46 publications
“…Firstly, low-level features, such as spectral, geometrical, and textural image features, are widely used in image analyses [11], but they are weak in characterizing functional zones which are usually composed of diverse objects with variant characteristics [12]. Then, middle-level features, including object semantics [4,8], visual elements [7], and bag-of-visual-word (BOVW) representations [13], are more effective than low-level features in representing functional zones [7], but they ignore spatial and contextual information of objects, leading to inaccurate recognition results. To resolve this issue, Hu et al (2015) extracted high-level features using convolutional neural network (CNN) [10], which could measure contextual information and were more robust than visual features in recognizing functional zones [14,16].…”
Section: Technical Issues
confidence: 99%
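The quoted passage contrasts low-level features (spectral/textural statistics) with middle-level bag-of-visual-words (BOVW) representations. A minimal, purely illustrative numpy sketch of that contrast is below; the patch, the random codebook, and the block-based descriptors are all hypothetical stand-ins, not the cited papers' actual pipelines.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 16x16 single-band "image patch" standing in for one VHR image object.
patch = rng.random((16, 16))

def low_level_features(p):
    # Simple spectral/textural statistics: mean, std, mean horizontal gradient.
    return np.array([p.mean(), p.std(), np.abs(np.diff(p, axis=1)).mean()])

def bovw_histogram(p, codebook):
    # Middle-level BOVW: describe 4x4 non-overlapping blocks by (mean, std),
    # assign each descriptor to its nearest codeword, histogram the counts.
    blocks = p.reshape(4, 4, 4, 4).transpose(0, 2, 1, 3).reshape(16, 16)
    desc = np.stack([blocks.mean(axis=1), blocks.std(axis=1)], axis=1)  # (16, 2)
    dist = ((desc[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    assign = dist.argmin(axis=1)
    hist = np.bincount(assign, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

codebook = rng.random((8, 2))  # illustrative random codebook (normally learned)
f_low = low_level_features(patch)
f_mid = bovw_histogram(patch, codebook)
print(f_low.shape, f_mid.shape)  # (3,) (8,)
```

The low-level vector summarizes the whole patch with three scalars, while the BOVW histogram records which local patterns occur; neither encodes where the blocks sit relative to each other, which is the spatial-context limitation the quote raises.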
“…To resolve this issue, Hu et al (2015) extracted high-level features using convolutional neural network (CNN) [10], which could measure contextual information and were more robust than visual features in recognizing functional zones [14,16]. Zhang et al (2017) had a different opinion on the relevance of deep-learning features and stated that these features rarely had geographic meaning and were weak for the purpose of interpretability [4]. Additionally, the size and shape of the convolution window can influence deep-learning features.…”
Section: Technical Issues
confidence: 99%
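The second statement notes that the size and shape of the convolution window influence deep-learning features. One concrete aspect is easy to show: a larger window summarizes a wider spatial context per output value and yields a smaller feature map. The plain "valid" convolution below is a toy sketch assuming a square kernel and stride 1; it is not any cited model's architecture.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Plain 'valid'-mode 2-D correlation with a square kernel, stride 1."""
    k = kernel.shape[0]
    h, w = img.shape
    out = np.empty((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + k, j:j + k] * kernel).sum()
    return out

img = np.arange(64, dtype=float).reshape(8, 8)
small = conv2d_valid(img, np.ones((3, 3)) / 9)   # 3x3 window -> 6x6 map
large = conv2d_valid(img, np.ones((5, 5)) / 25)  # 5x5 window -> 4x4 map
print(small.shape, large.shape)  # (6, 6) (4, 4)
```

Each output of the 5x5 averaging kernel pools 25 pixels instead of 9, so the same input produces coarser, more context-heavy responses; this is the sense in which window choice shapes what a convolutional feature measures.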