2022
DOI: 10.48550/arxiv.2206.02748
Preprint

Compound Multi-branch Feature Fusion for Real Image Restoration

Abstract: Image restoration is a challenging, ill-posed, and long-standing problem. However, most learning-based restoration methods target a single degradation type, which limits their generalization. In this paper, we propose a multi-branch restoration model inspired by the Human Visual System (i.e., Retinal Ganglion Cells) that can perform multiple restoration tasks within one general framework. The experiments show that the proposed multi-branch architecture, called CMF…

Cited by 3 publications (4 citation statements) | References 36 publications (58 reference statements)

“…Currently, the mean squared error (MSE) loss function [13] and the mean absolute error (MAE) loss function [28] are the predominant loss functions for measuring the error between corresponding pixels in the field of computer vision. MSE is susceptible to outliers, resulting in over-constraint [29], whereas MAE lacks gradient constraints [30], resulting in weak model convergence. Some studies have proposed a structural similarity index (SSIM) loss function based on human visual perception [31], which steers optimization toward perceptual quality; however, its illuminance and color restoration results are unsatisfactory.…”
Section: Proposed Methods
confidence: 99%
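The trade-offs this statement describes can be made concrete with a short sketch (illustrative only, not code from the cited papers): MSE squares residuals, so outliers dominate the gradient; MAE gives a constant-magnitude gradient, which can slow convergence near the optimum; an SSIM-based loss optimizes a perceptual similarity term instead of per-pixel error.

```python
# Minimal PyTorch sketch of the three loss families discussed above
# (illustrative only; not taken from the cited papers).
# pred and target are (N, C, H, W) tensors with values in [0, 1].
import torch
import torch.nn.functional as F

def mse_loss(pred, target):
    # Squared residuals: outliers dominate the gradient (over-constraint).
    return F.mse_loss(pred, target)

def mae_loss(pred, target):
    # Absolute residuals: the gradient magnitude is constant with respect to
    # the error, which can weaken convergence near the optimum.
    return F.l1_loss(pred, target)

def ssim_loss(pred, target, window=11, c1=0.01 ** 2, c2=0.03 ** 2):
    # 1 - SSIM with a uniform window for brevity (reference implementations
    # usually use a Gaussian window).
    pad = window // 2
    mu_x = F.avg_pool2d(pred, window, stride=1, padding=pad)
    mu_y = F.avg_pool2d(target, window, stride=1, padding=pad)
    var_x = F.avg_pool2d(pred * pred, window, stride=1, padding=pad) - mu_x ** 2
    var_y = F.avg_pool2d(target * target, window, stride=1, padding=pad) - mu_y ** 2
    cov_xy = F.avg_pool2d(pred * target, window, stride=1, padding=pad) - mu_x * mu_y
    ssim_map = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return 1.0 - ssim_map.mean()
```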
“…In order to constrain the training process, accelerate the convergence of the model, and improve the visual quality of the enhanced image, we take into account the characteristics of the aforementioned loss functions. The weighted part of the proposed loss function (Fan et al. 2022) [32] is removed, and the part consisting of the quotient of PSNR and SSIM is used to predict the error between low-light images and normal images, according to the following equation:…”
Section: Loss Function
confidence: 99%
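The excerpt elides the equation itself, so the sketch below is an assumed reading rather than the formulation of Fan et al. or of the citing paper: an error-like term derived from PSNR divided by SSIM, so that minimizing the quotient improves both metrics at once.

```python
# Hypothetical "quotient of PSNR and SSIM" loss term. The exact equation is
# elided in the excerpt; this is one assumed reading, not the published form.
import torch

def global_ssim(pred, target, c1=0.01 ** 2, c2=0.03 ** 2):
    # Global-statistics SSIM (no sliding window), kept simple for brevity.
    mu_x, mu_y = pred.mean(), target.mean()
    var_x, var_y = pred.var(), target.var()
    cov_xy = ((pred - mu_x) * (target - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def quotient_loss(pred, target, max_val=1.0, eps=1e-8):
    mse = torch.mean((pred - target) ** 2)
    psnr = 10.0 * torch.log10(max_val ** 2 / (mse + eps))  # higher is better
    ssim = global_ssim(pred, target)                       # higher is better
    # Reciprocal of PSNR divided by SSIM: decreases as both metrics improve.
    return (1.0 / (psnr + eps)) / (ssim + eps)
```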
“…From the subjective visual point of view, because of the limitations of the traditional DCP algorithm based on the dark channel theory, there is a large deviation when removing dense fog from the image. It can be seen that the image after de-fogging by the DCP algorithm (shown in Figure 7(b)) still has a severe problem; the quantitative comparison is reproduced in the table below. Even in outdoor scenarios with dense fog, the method proposed in this paper offers superior defogging capabilities. There is the least residual fog compared with other advanced algorithms, the authentic image is visible, and the subjective visual advantage is evident.…”

Method        PSNR (dB)   SSIM
DCP [11]      13.84       0.425
AOD [13]      13.14       0.414
GCANet [15]   14.25       0.497
CMFNet [37]   14.46       0.533
Ours          15.01       0.531

Section: Experimental Analysis of Synthetic Data Sets
confidence: 96%
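The DCP baseline in this comparison builds on the dark channel prior: the per-pixel minimum over the RGB channels, minimized again over a local patch, which is close to zero in haze-free outdoor regions but stays elevated under dense fog, one commonly cited reason for the failure noted above. A minimal sketch of the dark channel computation (illustrative, not the implementation evaluated in the cited paper):

```python
# Dark channel computation at the core of DCP-style dehazing
# (illustrative sketch; not the implementation evaluated above).
import numpy as np

def dark_channel(image, patch=15):
    """image: HxWx3 float array in [0, 1]; returns the HxW dark channel."""
    h, w, _ = image.shape
    min_rgb = image.min(axis=2)          # per-pixel minimum over R, G, B
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    dark = np.empty((h, w), dtype=min_rgb.dtype)
    for i in range(h):                   # minimum over each local patch
        for j in range(w):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    return dark
```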
“…The mean value of SSIM increases from the previous best of 0.971 to 0.976; the mean value of PSNR increases from the previous best of 31.38 dB to 32.96 dB, a relative increase of 1.58 dB.…”

Method        PSNR (dB)   SSIM
DCP [11]      20.64       0.800
AOD [13]      19.16       0.850
GCANet [15]   30.23       0.975
CMFNet [37]   31.38       0.971
Ours          32.96       0.976

Section: Experimental Analysis of Synthetic Data Sets
confidence: 99%
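Aggregate numbers like these are normally obtained by averaging per-image PSNR and SSIM over the test set; the sketch below shows one way to do that with scikit-image (assumed tooling, not the evaluation code of the cited papers). Note that SSIM is dimensionless, while PSNR is reported in dB.

```python
# Averaging per-image PSNR/SSIM over a test set with scikit-image
# (assumed tooling; not the evaluation code used by the cited papers).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def mean_psnr_ssim(restored_images, reference_images):
    """Both arguments: iterables of HxWx3 float arrays in [0, 1]."""
    psnrs, ssims = [], []
    for restored, reference in zip(restored_images, reference_images):
        psnrs.append(peak_signal_noise_ratio(reference, restored, data_range=1.0))
        ssims.append(structural_similarity(reference, restored,
                                           channel_axis=-1, data_range=1.0))
    return float(np.mean(psnrs)), float(np.mean(ssims))
```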