2018
DOI: 10.26833/ijeg.373152
Automatic Extraction of Building Boundaries From High Resolution Images With Active Contour Segmentation

Abstract: Building extraction from remotely sensed images plays an important role in many applications such as updating geographical information systems, change detection, urban planning, disaster management and 3D building modeling. Automatic extraction of buildings from aerial images is not an easy task because of background complexity, lighting conditions and vegetation cover that reduce the separability or visibility of buildings. As a result, automatic building extraction can be a complex process for computer vision an…

Citations: Cited by 9 publications (4 citation statements)
References: 19 publications
“…(7) might be used. The F-measure can be used to quantify the balance between accuracy and completeness (Samal et al. 2004; Song et al. 2011; Akbulut et al. 2018; Hacar and Gökgöz 2019).…”
Section: Evaluation of the Results (mentioning)
Confidence: 99%
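For context, the F-measure mentioned in this excerpt is conventionally the harmonic mean of correctness (precision) and completeness (recall). The following is the standard textbook formulation, not an equation quoted from the cited papers:

\text{Completeness} = \frac{TP}{TP + FN}, \quad \text{Correctness} = \frac{TP}{TP + FP}, \quad F = \frac{2 \cdot \text{Correctness} \cdot \text{Completeness}}{\text{Correctness} + \text{Completeness}}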
“…There are many methods in the literature for detail extraction and object classification studies (Ma et al. 2017; Yiğit and Uysal 2019; Luo et al. 2021; Sarıtürk et al. 2020). The method most commonly used for the classification of high-resolution images is object-based classification (Ge et al. 2014; Akbulut et al. 2018; Çömert et al. 2019). Pixel-based approaches, by contrast, work on each pixel individually and extract information from remotely sensed data based on spectral information only (Gupta and Bhadauria 2014; Tehrany et al. 2014; Khatami et al. 2016; Louargant et al. 2018).…”
Section: Methods (mentioning)
Confidence: 99%
“…where TP (true positive) refers to an entity classified as an object that also corresponds to an object in the reference; FN (false negative) refers to an entity that corresponds to an object in the reference but is classified as background; FP (false positive) refers to an entity classified as an object that does not correspond to an object in the reference; and TN (true negative) refers to an entity that belongs to the background in both the classification and the reference data (Rutzinger et al. 2009; Karsli et al. 2016; Akbulut et al. 2018). Reference data were created from the LiDAR point cloud by manually selecting tree points.…”
Section: TP Completeness (mentioning)
Confidence: 99%
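To make the relationship between these counts and the reported metrics concrete, here is a minimal Python sketch; the function name and the example counts are hypothetical illustrations, not taken from the cited papers. It derives completeness, correctness, and the F-measure from the TP, FP, and FN counts defined in the excerpt above (TN is not needed for these three metrics):

def evaluation_metrics(tp, fp, fn):
    """Completeness (recall), correctness (precision), and F-measure
    from per-entity confusion-matrix counts. Hypothetical helper."""
    completeness = tp / (tp + fn)   # share of reference objects that were detected
    correctness = tp / (tp + fp)    # share of detected objects that are real
    f_measure = (2 * correctness * completeness) / (correctness + completeness)
    return completeness, correctness, f_measure

# Hypothetical counts: 90 correct detections, 10 false detections, 5 missed objects
print(evaluation_metrics(tp=90, fp=10, fn=5))  # -> (0.947..., 0.9, 0.923...)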