2011
DOI: 10.1080/01431160903463684
Image fusion for enhanced forest structural assessment

Cited by 19 publications (7 citation statements) · References 46 publications
“…Concerning this, the results here are in line with other reports on forestry applications of spatially enhanced remote sensing data on a medium scale, in which no significant gain (except for an improved visual interpretability) has been achieved through different spatial enhancement techniques (e.g. Roberts et al, 2011). This also supports a recent statement of Witharana et al (2014) on the necessity of further research to understand the synergies of data fusion and image segmentation.…”
Section: Discussion (supporting)
confidence: 92%
“…Therefore, the DE algorithm was employed to iterate the size of the block region to obtain the optimized block size and corresponding fusion result adaptively [18]. A weighted evaluation function is proposed to evaluate the maximum amplitude projection (MAP) image of the fused 3D data, which combines three quantification parameters, namely information entropy (EN) [19], average gradient (AVG) [20], and structural similarity index (SSIM) [21], as shown in formula (1): $\mathcal{L} = \lambda_{\mathrm{AVG}}\mathcal{L}_{\mathrm{AVG}} + \lambda_{\mathrm{EN}}\mathcal{L}_{\mathrm{EN}} + \lambda_{\mathrm{SSIM}}\mathcal{L}_{\mathrm{SSIM}}$, where $\mathcal{L}_{\mathrm{AVG}}$, $\mathcal{L}_{\mathrm{EN}}$, and $\mathcal{L}_{\mathrm{SSIM}}$ are the quantitative evaluations for the MAP image of the fused 3D data, and $\lambda_{\mathrm{AVG}}$, $\lambda_{\mathrm{EN}}$, and $\lambda_{\mathrm{SSIM}}$ are the pre-defined weights for the three quantification parameters.…”
Section: Methods (mentioning)
confidence: 99%
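The weighted evaluation function quoted above can be sketched in a few lines of numpy. This is a minimal illustration, not the cited authors' implementation: the function names, the single-window SSIM simplification, and the uniform default weights are all assumptions made here for clarity.

```python
import numpy as np

def entropy(img, bins=256):
    # Information entropy (EN): Shannon entropy of the grey-level histogram.
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def avg_gradient(img):
    # Average gradient (AVG): mean magnitude of row/column intensity gradients.
    gy, gx = np.gradient(img.astype(float))
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def ssim(a, b, c1=1e-4, c2=9e-4):
    # Global (single-window) SSIM between two images; a simplification of
    # the usual local sliding-window formulation.
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def weighted_score(fused_map, reference_map, w_avg=1.0, w_en=1.0, w_ssim=1.0):
    # L = λ_AVG·L_AVG + λ_EN·L_EN + λ_SSIM·L_SSIM  (formula (1) in the quote)
    return (w_avg * avg_gradient(fused_map)
            + w_en * entropy(fused_map)
            + w_ssim * ssim(fused_map, reference_map))
```

In an optimization loop such as the DE search described in the quote, `weighted_score` would be evaluated once per candidate block size, and the block size maximizing the score kept.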
“…Over the years, the fusing of data from multiple sensors has been a common technique, and this trend is bound to continue as geospatial technologies improve (Roberts et al 2011). According to Chavez et al (1991), one of the reasons for this increase in fusing multiple data-sets is the complementary information of the different data-sets.…”
Section: Research Objectives (mentioning)
confidence: 99%