2021
DOI: 10.3390/rs13091661
Large-Scale, Multiple Level-of-Detail Change Detection from Remote Sensing Imagery Using Deep Visual Feature Clustering

Abstract: In the era of big data, where massive amounts of remotely sensed imagery can be obtained from various satellites accompanied by the rapid change in the surface of the Earth, new techniques for large-scale change detection are necessary to facilitate timely and effective human understanding of natural and human-made phenomena. In this research, we propose a chip-based change detection method that is enabled by using deep neural networks to extract visual features. These features are transformed into deep orthog…

Cited by 9 publications (1 citation statement)
References 66 publications
“…The literature [27]- [30] introduces the visual cone clipping algorithm in the real-time rendering of large-scale terrain, which performs the necessary clipping of data objects according to the change in the viewpoint range to load data objects quickly and accurately. In addition, level-of-detail (LOD) technology [31]- [34] reduces the detailed information of data according to viewpoint position and object distance, thus improving the rendering efficiency.…”
Section: Introduction
confidence: 99%
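The quoted passage describes distance-based level-of-detail (LOD) selection: detail is reduced as the distance between the viewpoint and a data object grows, so coarser representations are rendered for far-away terrain. A minimal sketch of that idea follows; the thresholds and function name are illustrative assumptions, not taken from the cited works.

```python
# Minimal sketch of distance-based LOD selection, assuming fixed
# distance thresholds (illustrative values, not from the cited papers).

def select_lod(distance: float, thresholds=(100.0, 500.0, 2000.0)) -> int:
    """Return an LOD index: 0 = full detail, higher = coarser."""
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    # Beyond the last threshold, fall back to the coarsest level.
    return len(thresholds)

print(select_lod(50.0))    # near the viewpoint -> 0 (full detail)
print(select_lod(750.0))   # mid-range -> 2
print(select_lod(5000.0))  # far away -> 3 (coarsest)
```

In a real renderer the LOD index would choose between precomputed terrain meshes or image pyramids of decreasing resolution, trading visual fidelity for rendering throughput.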