2007
DOI: 10.1016/j.inffus.2005.10.001

A human perception inspired quality metric for image fusion based on regional information

Cited by 218 publications (81 citation statements)
References 13 publications
“…However, this usually means time consuming and often expensive experiments involving a large number of human subjects. In recent years, a number of computational image fusion quality assessment metrics have therefore been proposed [2,3,[5][6][7][12][13][14]36,42,44,46,49,[52][53][54][55]. Although some of these metrics agree with human visual perception to some extent, most of them cannot predict observer performance for different input imagery and scenarios.…”
Section: Introduction
confidence: 99%
“…Other methods for assessing fusion quality have been proposed (Liu et al, 2008;Chen and Varshney, 2007;Zheng & Chin;Zheng et al, 2008;Chen & Blum, 2009;Wang et al, 2008). Liu et al (2008) proposed two metrics based on a modified structural similarity measure (FSSIM) scheme and the local cross-correlation between the feature maps of the fused and input images.…”
Section: Grade
confidence: 99%
“…These metrics provide an objective quality measure in the absence of a reference image. Chen & Varshney (2007) proposed a new quality metric for image fusion that does not require a reference image. It is based on local information given by a set of localized windows and by the difference in the frequency domain filtered by a contrast sensitivity function.…”
Section: Grade
confidence: 99%
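The excerpt above describes the metric only at a high level: local quality is computed over a set of localized windows, with the difference between images weighted in the frequency domain by a contrast sensitivity function (CSF). The sketch below is a minimal illustration of that idea, not the paper's actual algorithm; the window combination rule, the use of the Mannos-Sakrison CSF model, and all function names are assumptions.

```python
import numpy as np

def csf_weight(f):
    # Mannos-Sakrison contrast sensitivity function -- a common CSF model;
    # the exact CSF used in the paper is an assumption here.
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def csf_filtered_difference(a, b):
    # Difference between two windows, weighted in the frequency domain
    # by the CSF, summarized as an energy score.
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    h, w = a.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radial = np.hypot(fy, fx) * max(h, w)  # radial frequency, cycles/image
    return np.sum(np.abs((fa - fb) * csf_weight(radial)) ** 2) / (h * w)

def regional_fusion_quality(src1, src2, fused, win=16):
    # Slide non-overlapping windows over the images; in each window, score
    # how closely the fused image follows its better-matching source
    # (a hypothetical combination rule, for illustration only).
    scores = []
    h, w = fused.shape
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            d1 = csf_filtered_difference(src1[y:y+win, x:x+win],
                                         fused[y:y+win, x:x+win])
            d2 = csf_filtered_difference(src2[y:y+win, x:x+win],
                                         fused[y:y+win, x:x+win])
            # smaller CSF-weighted difference -> better local fusion quality
            scores.append(1.0 / (1.0 + min(d1, d2)))
    return float(np.mean(scores))
```

As a sanity check, a "fused" image identical to one source scores 1.0 in every window, while any genuine blend of the two sources scores strictly below 1.0.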
“…In recent years, a number of computational image fusion quality assessment metrics have therefore been proposed (e.g. Angell, 2005;Blum, 2006;Chari et al, 2005;Chen & Varshney, 2005;Chen & Varshney, 2007;Corsini et al, 2006;Cvejic et al, 2005a;Cvejic et al, 2005b;Piella & Heijmans, 2003;Toet & Hogervorst, 2003;Tsagiris & Anastassopoulos, 2004;Ulug & Claire, 2000;Wang & Shen, 2006;Xydeas & Petrovic, 2000;Yang et al, 2007;Zheng et al, 2007;Zhu & Jia, 2005). Although some of these metrics agree with human visual perception to some extent, most of them cannot predict observer performance for different input imagery and scenarios.…”
Section: The Need For Image Fusion Quality Metrics
confidence: 99%