2018
DOI: 10.1016/j.isprsjprs.2018.02.006

Building instance classification using street view images

Abstract: This is the pre-print version; for the final version please see the ISPRS Journal of Photogrammetry and Remote Sensing, Elsevier (https://doi.org/10.1016/j.isprsjprs.2018.02.006). Land-use classification based on spaceborne or aerial remote sensing images has been extensively studied over the past decades. Such classification is usually a patch-wise or pixel-wise labeling over the whole image. But for many applications, such as urban population density mapping or urban utility planning, a classificatio…

Cited by 272 publications (164 citation statements)
References 56 publications
“…To cope with the aforementioned challenge of generalization, the 2017 Geoscience and Remote Sensing Society (GRSS) data fusion contest proposed training the classifier on five cities (Berlin, Hong Kong, Paris, Rome, and Sao Paulo) and testing the results on four other cities (Amsterdam, Chicago, Madrid, and Xi'an). Although deep learning-based classification methods have proven to be strong in terms of classification accuracy and generalization capability in the remote sensing community [21][22][23][24], the ensemble-based canonical correlation forest (CCF) classification strategy achieved the best performance in the contest, among more than 800 submissions [8,25]. Therefore, this work uses the CCF classifier to pursue a solution for our task.…”
Section: Introduction (mentioning)
confidence: 99%
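The cross-city protocol described in the excerpt above (train on five cities, test on four unseen ones) can be sketched as follows. This is a minimal illustration under assumptions: scikit-learn provides no canonical correlation forest, so a RandomForestClassifier stands in as a placeholder ensemble, and the per-city data are synthetic stand-ins for real multispectral features and class labels.

```python
# Minimal cross-city generalization sketch; synthetic data and a
# RandomForestClassifier substitute for the canonical correlation forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def city_samples(n=200, d=16, n_classes=5):
    """Placeholder for per-city feature/label extraction (hypothetical)."""
    return rng.normal(size=(n, d)), rng.integers(0, n_classes, size=n)

train_cities = ["Berlin", "Hong Kong", "Paris", "Rome", "Sao Paulo"]
test_cities = ["Amsterdam", "Chicago", "Madrid", "Xi'an"]

# Pool the five training cities into one training set.
Xs, ys = zip(*(city_samples() for _ in train_cities))
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(np.concatenate(Xs), np.concatenate(ys))

# Evaluate each held-out city separately to probe generalization.
for city in test_cities:
    X_test, y_test = city_samples()
    print(city, accuracy_score(y_test, clf.predict(X_test)))
```

Scoring each held-out city on its own, as in the contest, shows how well a model trained on the pooled cities transfers to urban areas it has never seen.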
“…Initialization: Θ = 0, G = 0, Λ_1 = 0, Λ_2 = 0, μ = 10^(-3), μ_max = 10^6, ρ = 1.5, ε = 10^(-6), t = 1. While not converged and t < maxIter do: fix the other variables to update J by J = (P^T P + μI)^(-1) (P^T Y + μΘX − Λ_1); fix the other variables to update Θ by…”
Section: Problem Formulation (mentioning)
confidence: 99%
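Only the initialization and the J-update are fully visible in the quoted fragment; the Θ-update is elided. A minimal NumPy sketch of the visible step, with hypothetical matrix shapes and variable roles, could look like this.

```python
# Sketch of the quoted augmented-Lagrangian-style J-update; shapes and the
# meaning of P, Y, Theta, X are assumptions, and only the visible step is shown.
import numpy as np

def update_J(P, Y, Theta, X, Lambda1, mu):
    """J = (P^T P + mu*I)^(-1) (P^T Y + mu*Theta@X - Lambda1), as quoted."""
    d = P.shape[1]
    rhs = P.T @ Y + mu * (Theta @ X) - Lambda1
    return np.linalg.solve(P.T @ P + mu * np.eye(d), rhs)

# Initialization constants exactly as quoted in the fragment.
mu, mu_max, rho, eps, t = 1e-3, 1e6, 1.5, 1e-6, 1

# Smoke test with random matrices of compatible (assumed) shapes.
n, d, m, k = 40, 10, 10, 8
P, Y = np.random.randn(n, d), np.random.randn(n, k)
Theta, X, Lambda1 = np.zeros((d, m)), np.random.randn(m, k), np.zeros((d, k))
J = update_J(P, Y, Theta, X, Lambda1, mu)   # shape (d, k)
```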
“…Multispectral (MS) imagery has been receiving increasing interest in urban areas (e.g. large-scale land-cover mapping [1][2], building localization [3]), agriculture [4], and mineral products [5], as operational optical broadband (multispectral) satellites (e.g. Sentinel-2 and Landsat-8 [6]) make multispectral imagery openly available on a global scale.…”
Section: Introduction (mentioning)
confidence: 99%
“…Thus, using aerial images has the limitation that it is infeasible to generate a fine-grained land use map, which is referred to as the "semantic gap" problem. Some recent efforts [9], [10] propose using other data sources that bring more information about the internal structure or human activities of the land to close the semantic gap, such as points of interest (POI) [11], street view [12], mobile phone data [13] and social multimedia [14]. However, approaches that depend on POI, street view, mobile records and textual tweets may also face the problem of limited observations inside the building.…”
Section: Introduction (mentioning)
confidence: 99%