2022
DOI: 10.1016/j.compeleceng.2022.107822
Deep retinex decomposition network for underwater image enhancement

Cited by 24 publications (6 citation statements)
References 20 publications
“…Xu et al. 87 employed a Retinex method in their learning approach and presented a deep retinex decomposition network for enhancing underwater images. The authors focused on improving color, contrast, and brightness, and estimated the illumination using a CNN as the baseline model.…”
Section: Learning-based Methods (mentioning, confidence: 99%)
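The cited approach builds on the classic Retinex image model, in which an observed image I is factored pixel-wise into reflectance R and illumination L (I = R · L). The sketch below is a minimal single-scale Retinex baseline, assuming NumPy/SciPy and an H×W×3 input, where a Gaussian blur stands in for the illumination estimate; the cited paper replaces this hand-crafted estimator with a CNN, so this snippet only illustrates the underlying decomposition, not the authors' network.

```python
# Minimal single-scale Retinex sketch of the decomposition I = R * L.
# The Gaussian-blur illumination estimate is a classical stand-in for
# the CNN estimator used in the cited paper.
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image: np.ndarray, sigma: float = 80.0) -> np.ndarray:
    """Return log-reflectance log(R) = log(I) - log(L) for an HxWx3 image."""
    image = image.astype(np.float64) + 1.0             # avoid log(0)
    illumination = gaussian_filter(image, sigma=(sigma, sigma, 0))
    return np.log(image) - np.log(illumination)        # log R = log I - log L
```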
“…The image becomes more in line with human subjective perception [16]. Underwater image enhancement algorithms can be divided into two categories according to their domain of operation: the former improves the image directly at the pixel level, while the latter converts the image to the frequency domain through a spatial-to-frequency transform and manipulates some of its special frequency-domain properties [17][18][19].…”
Section: Underwater Image Enhancement (mentioning, confidence: 99%)
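For the frequency-domain category this statement describes, a minimal sketch (assuming NumPy and a grayscale input) is the generic high-frequency-emphasis filter below: take a 2-D FFT, reweight the frequency components, and invert. The filter shape and parameter values are illustrative assumptions, not taken from the cited papers.

```python
# Generic frequency-domain enhancement: FFT -> reweight spectrum -> inverse FFT.
# Boosting high frequencies sharpens edges and fine texture.
import numpy as np

def high_freq_emphasis(gray: np.ndarray, base: float = 0.5,
                       gain: float = 1.5, cutoff: float = 30.0) -> np.ndarray:
    """Apply a high-frequency-emphasis filter to a 2-D grayscale image."""
    rows, cols = gray.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    d = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)     # distance from the DC term
    h = base + gain * (1.0 - np.exp(-(d ** 2) / (2 * cutoff ** 2)))
    spectrum = np.fft.fftshift(np.fft.fft2(gray))      # center the spectrum
    out = np.fft.ifft2(np.fft.ifftshift(h * spectrum)).real
    return np.clip(out, 0, 255)
```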
“…By extracting the V channel component of the HSV color space, the component was converted to a reflection component using a DL network (Jiang Z. et al., 2021). Owing to their significant worth, Retinex and DL-based methods were applied to image dehazing and underwater image enhancement (Xu et al., 2022; Shen et al., 2023).…”
Section: Introduction (mentioning, confidence: 99%)
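As a rough sketch of the HSV pipeline this statement describes, the snippet below (assuming OpenCV and a BGR uint8 input) splits off the V (brightness) channel, enhances it, and recombines the channels. The cited work passes V through a deep network to recover a reflection component; the CLAHE step here is only an assumed placeholder for that network.

```python
# Enhance only the V (value/brightness) channel of an HSV image, leaving
# hue and saturation untouched, then convert back to BGR.
import cv2
import numpy as np

def enhance_v_channel(bgr: np.ndarray) -> np.ndarray:
    """Process the V channel of a BGR uint8 image; CLAHE stands in for a DL network."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    v_enhanced = clahe.apply(v)                        # placeholder for the DL network
    return cv2.cvtColor(cv2.merge([h, s, v_enhanced]), cv2.COLOR_HSV2BGR)
```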