2022
DOI: 10.3390/rs14194762

Object-Oriented Canopy Gap Extraction from UAV Images Based on Edge Enhancement

Abstract: Efficient and accurate identification of canopy gaps is the basis of forest ecosystem research, and is of great significance for further forest monitoring and management. Among the existing studies that use remote sensing to map canopy gaps, object-oriented classification has proved successful owing to its merits in overcoming the problem that the same object may have different spectra while different objects may have the same spectra. However, mountainous land cover is unusually fragmented, and the…


Cited by 8 publications (4 citation statements); references 55 publications. Citing statements published 2022–2024.
“…It was confirmed that the transferability of the CNN models was better than that of the DF models. We speculate that this is because, although the deep forest model is constructed with multiple levels in a way that resembles a deep learning model, it is in essence still a tree-based machine learning model, so its transfer-learning performance cannot match that of a deep learning model, even though it has been shown to perform well in few-shot learning [59]; other studies have likewise demonstrated the effectiveness of machine learning algorithms only in a single region [18,27,32,35,36]. However, considering the results shown in Figure 11 and Table 7, the performance of the DF models in a single region was comparable to that of the CNN models while their transferability was very poor; we assume that the DF models may have overfitted [72], undermining the accuracy of transfer learning.…”
Section: Advantages and Potential of the Forest Gap Extraction Model
confidence: 99%
“…Most forest gap extraction studies based on airborne LiDAR and high-resolution multi-spectral data have used the object-based image analysis (OBIA) approach to first segment and then classify forest gaps [27,34,35]. These studies are premised on accurate segmentation and extraction of forest gap boundaries, and therefore rely heavily on high-quality CHM data derived from high-accuracy LiDAR data; their classification accuracy for forest gaps may thus be reduced in the absence of high-accuracy CHM data [36].…”
Section: Introduction
confidence: 99%
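The segment-then-classify OBIA workflow described in the statement above can be sketched minimally on a CHM raster. Everything below is an illustrative assumption, not a parameter from the cited studies: the 2 m height threshold, 4-connectivity flood fill, and minimum-area rule are placeholder choices, and the CHM is represented as a plain 2D list of heights in metres.

```python
# Minimal sketch of OBIA-style "segment, then classify" canopy-gap extraction
# from a CHM (canopy height model). Thresholds and connectivity are
# illustrative assumptions, not values from the cited papers.

GAP_HEIGHT_M = 2.0    # pixels below this canopy height are gap candidates
MIN_GAP_PIXELS = 3    # segments smaller than this are treated as noise

def extract_gaps(chm):
    """Return a list of gap segments, each a list of (row, col) pixels."""
    rows, cols = len(chm), len(chm[0])
    low = [[h < GAP_HEIGHT_M for h in row] for row in chm]
    seen = [[False] * cols for _ in range(rows)]
    gaps = []
    for r in range(rows):
        for c in range(cols):
            if low[r][c] and not seen[r][c]:
                # segmentation step: flood-fill one 4-connected component
                # of gap-candidate pixels
                stack, segment = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    segment.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and low[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # classification step: keep only sufficiently large segments
                if len(segment) >= MIN_GAP_PIXELS:
                    gaps.append(segment)
    return gaps
```

As the quoted statement notes, the whole pipeline hinges on the CHM: with a noisy or low-accuracy height model, the `low` mask is wrong from the start and no downstream classification rule can recover the true gap boundaries.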
“…The field grape validation set was classified using SVM [45], ML [46], RF [47], and ISODATA, and the classification results are shown in Figure 8 and Table 6. The results showed that grape cultivation in the field was complex, containing not only non-crop cover, such as barns and land, but also green vegetation, such as maize and weeds among the grapes.…”
Section: Research on Traditional Grape Land Information Extraction Me…
confidence: 99%