2020
DOI: 10.3390/ai1040029

Comparing U-Net Based Models for Denoising Color Images

Abstract: Digital images often become corrupted by undesirable noise during the process of acquisition, compression, storage, and transmission. Although the kinds of digital noise are varied, current denoising studies focus on denoising only a single and specific kind of noise using a devoted deep-learning model. Lack of generalization is a major limitation of these models. They cannot be extended to filter image noises other than those for which they are designed. This study deals with the design and training of a gene…

Cited by 30 publications (12 citation statements)
References: 61 publications
“…Therefore, it effectively suppresses the noise components and preserves the structural details in the LDCT images [60]. After that, the expansion path constructs the feature-enhanced noise-reduced images across the upsampling layers [61]. According to this information, it can be stated that the selection of the U-net-based Generator is ideal for the proposed GAN model.…”
Section: Discussion (mentioning)
confidence: 99%
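The excerpt above describes the usual U-Net-shaped generator pattern: a contracting path that suppresses noise and an expansion path that rebuilds the image through upsampling layers, with the result used as a GAN generator. As a rough illustration of that pattern only (not the cited paper's actual model; the channel widths, depth, and patch size below are assumptions), a minimal PyTorch sketch might look like this:

```python
# Minimal sketch of a U-Net-style generator: contracting path, bottleneck,
# and an expansion path whose upsampling layers rebuild the denoised image.
# Layer counts and channel widths are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class UNetGenerator(nn.Module):
    def __init__(self, channels=1):
        super().__init__()
        self.enc1 = conv_block(channels, 32)   # contracting path
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(128, 64)        # 128 = 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.out = nn.Conv2d(32, channels, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # expansion path
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)                    # denoised image estimate

# Usage: a batch of 64x64 single-channel (e.g. LDCT-like) patches.
noisy = torch.randn(4, 1, 64, 64)
denoised = UNetGenerator(channels=1)(noisy)
print(denoised.shape)  # torch.Size([4, 1, 64, 64])
```

In a GAN setting this module would play the generator role, with a separate discriminator judging denoised versus clean images; that part is omitted here.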
“…• An encoder-decoder (Enc-Dec) consisting of 4 convolutional layers for encoding, followed by 2 fully connected layers interposed with dropout layers (dropout rate tuned to reach an optimum at 0.1), after which the model splits into two branches of three convolutional layers each, for optimization of the two tasks. • A UNet model [31][32][33][47] adapted to the current tasks: a combination of max pooling, convolutional, and fully connected layers, for a total of 12 layers and 3 skip connections. Skip connections concatenate high-resolution features produced by the encoder with upsampled features of the decoder to enable precise segmentation.…”
Section: CNNs Architecture (mentioning)
confidence: 99%
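The first bullet in the excerpt describes a shared convolutional encoder, fully connected layers with dropout at 0.1, and a split into two task-specific branches. A minimal sketch of that multi-task encoder-decoder shape is given below; the channel widths, input size, and the two task heads are illustrative assumptions, not taken from the cited paper.

```python
# Minimal sketch of a two-branch encoder-decoder: 4 encoding convolutions,
# 2 fully connected layers interposed with dropout (rate 0.1), then two
# branches of three convolutional layers each, one per task.
import torch
import torch.nn as nn

class TwoBranchEncDec(nn.Module):
    def __init__(self, in_ch=1, img_size=32):
        super().__init__()
        # 4 convolutional layers for encoding (strided to downsample)
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        feat = 64 * (img_size // 16) ** 2
        # 2 fully connected layers interposed with dropout layers
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(feat, 256), nn.ReLU(), nn.Dropout(0.1),
            nn.Linear(256, 64 * 4 * 4), nn.ReLU(), nn.Dropout(0.1),
        )
        # the model splits into two branches of three convolutional layers each
        def branch(out_ch):
            return nn.Sequential(
                nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, out_ch, 3, padding=1),
            )
        self.task_a = branch(1)  # hypothetical head for the first task
        self.task_b = branch(1)  # hypothetical head for the second task

    def forward(self, x):
        z = self.fc(self.encoder(x)).view(-1, 64, 4, 4)
        return self.task_a(z), self.task_b(z)

out_a, out_b = TwoBranchEncDec()(torch.randn(2, 1, 32, 32))
print(out_a.shape, out_b.shape)  # torch.Size([2, 1, 4, 4]) each
```

The UNet variant in the second bullet differs mainly in its 3 skip connections, which concatenate encoder features with upsampled decoder features as in the generator sketch shown earlier.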
“…Indeed, these two tasks are exemplary of the two main issues in the field of image evaluation: detectability preservation with increasing noise and area identification in a low-contrast environment. All experiments were based on the adoption of two CNN architectures of different complexity: a standard encoder-decoder and its extension, the more powerful UNet model, widely applied in the field of medical imaging for segmentation and denoising tasks [31-47]. The choice of these two models was guided by the intention to study how increasing computational power affects the structure of information in the processed images.…”
Section: Introduction (mentioning)
confidence: 99%
“…The saturation of performance with the depth of the architecture is a known effect in deep learning (Glorot & Bengio, 2010; Priyanka & Wang, 2019). It is the reason for the development of architectural/training tricks that keep validation performance high as the number of free parameters increases (Komatsu & Gonsalves, 2020; He, Zhang, Ren, & Sun, 2016; Bengio, Lamblin, Popovici, & Larochelle, 2006). We did not explore these specific tricks for two reasons: (1) the aim of the paper is not pure goal optimization but studying the possible emergence of human-like behavior in the networks, and (2) the considered cases already show an improvement in performance from the shallower to the deeper configurations, and the subsequent decay in performance is small (and negligible in visual terms in most cases).…”
Section: Architectures and Performance (mentioning)
confidence: 99%
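For context on the "architectural/training tricks" the excerpt declines to use, the residual connection of He, Zhang, Ren, & Sun (2016) is the canonical example: an identity shortcut around a small stack of layers so that depth can grow without the usual optimization penalty. A minimal, illustrative block follows; the layer choices are assumptions, not drawn from the cited papers.

```python
# Minimal residual block: the identity shortcut means the stack only has to
# learn a residual on top of x, which keeps gradients flowing as depth grows.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        # Add the identity shortcut to the learned residual, then activate.
        return torch.relu(x + self.body(x))

x = torch.randn(1, 64, 32, 32)
print(ResidualBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```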