2019 11th International Symposium on Image and Signal Processing and Analysis (ISPA)
DOI: 10.1109/ispa.2019.8868679
Underwater Color Restoration Using U-Net Denoising Autoencoder

Abstract: Visual inspection of underwater structures by vehicles, e.g. remotely operated vehicles (ROVs), plays an important role in scientific, military, and commercial sectors. However, the automatic extraction of information using software tools is hindered by the characteristics of water, which degrade the quality of captured videos. As a contribution to restoring the color of underwater images, the Underwater Denoising Autoencoder (UDAE) model is developed using a denoising autoencoder with U-Net architecture. The prop…
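To make the architecture concrete, below is a minimal sketch of a U-Net-style denoising autoencoder in PyTorch. This is an illustrative assumption, not the paper's exact UDAE network: the class name `TinyUNetDAE`, the channel widths, and the depth are all invented for brevity; only the general pattern (convolutional encoder, bottleneck, decoder with skip connections, degraded RGB in and restored RGB out) follows the description above.

```python
# Hypothetical U-Net-style denoising autoencoder for underwater color
# restoration. Layer sizes and names are illustrative assumptions,
# not the UDAE paper's exact configuration.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, as in the classic U-Net design.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class TinyUNetDAE(nn.Module):
    """Minimal U-Net autoencoder: degraded RGB in, restored RGB out."""

    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)   # 32 upsampled + 32 from skip
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)   # 16 upsampled + 16 from skip
        self.out = nn.Conv2d(16, 3, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        # Skip connections concatenate encoder features with the
        # upsampled decoder features at each resolution.
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.out(d1))  # pixel values in [0, 1]


model = TinyUNetDAE()
degraded = torch.rand(1, 3, 64, 64)  # stand-in for a degraded underwater frame
restored = model(degraded)           # same spatial size, restored colors
```

In a denoising-autoencoder setup, such a network would be trained end-to-end on pairs of degraded and reference images with a pixel-wise reconstruction loss; the skip connections let fine spatial detail bypass the bottleneck, which is the main reason U-Net backbones are popular for image restoration.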

Cited by 38 publications (23 citation statements)
References 19 publications
“…The authors in [47] created the UW Denoising Autoencoder (UDAE) model, a deep learning network built on a single denoising autoencoder employing U-Net as a Convolutional Neural Network (CNN) architecture, as a contribution to restoring the colour of UW images. The suggested network considers accuracy and computation cost for real-time implementation on UW visual tasks utilising an end-to-end autoencoder network, resulting in improved UW photography and video content.…”
Section: Various Contributions In Recent Years
Citation type: mentioning (confidence: 99%)
“…One example presented by Hashisho et al.: (a) is the input image, (b) the result produced by UGAN, and (c) the result produced by UDAE. Images from [47].…”
Section: Figure
Citation type: mentioning (confidence: 99%)
“…We validate the proposed loss function with an ablation study and compare Cast-GAN with four state-of-the-art methods: two neural networks, namely U-Net Denoising (U-Net) [8] and Underwater Scene Prior Inspired Image Enhancement (UWCN) [12], and two physics-based methods, namely Depth-dependent Background Light (DBL) [16] and Underwater Haze Line (UWHL) [19]. UWHL and Cast-GAN are the only methods explicitly aiming to remove colour cast.…”
Section: Validation
Citation type: mentioning (confidence: 99%)
“…Table 1 shows that, when compared against the original image, Cast-GAN is selected by the participants 60.2% of the times. UWHL is … [figure labels: Original, DBL [16], U-NET [8], UWCN [12], UWHL [19], Cast-GAN] Note how Cast-GAN removes the cast and enhances details, while maintaining realistic colours, especially in images with heavy casts. Since U-Net can only process images of specific resolutions, we select here the image provided by the authors, whose resolution is the closest to the original image's aspect ratio.…”
Section: Original Zoom
Citation type: mentioning (confidence: 99%)