2019
DOI: 10.1109/access.2018.2886294
Image Noise Level Estimation for Rice Noise Based on Extended ELM Neural Network Training Algorithm

Abstract: The estimation of image noise level is a critical task for image denoising and super-resolution reconstruction. Mathematical methods, such as patch-based or model-based methods, suffer from sensitivity to the selection of homogeneous regions or of a proper statistical model, leading to inaccurate estimation, especially for signal-dependent noise such as Rice noise. Ordinary fully connected networks often suffer from over-fitting, restricting their use on realistic images. This …

Cited by 7 publications (1 citation statement) | References 35 publications
“…Finally, there is the network output layer, which is used to output the results. When a suitable network structure is chosen, this neural network can fit any continuous function without error [25][26][27]. The input layer has 𝑗 neurons, one for each input variable; the hidden layer consists of 𝑘 neurons; and the output layer has 𝑖 neurons, one for each output variable.…”
Section: Extreme Learning Machine
confidence: 99%
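The cited passage describes the standard Extreme Learning Machine layout: 𝑗 inputs, 𝑘 random hidden neurons, 𝑖 outputs, with only the output weights trained. A minimal sketch of that idea is below, assuming a tanh activation and a least-squares solve via the Moore-Penrose pseudoinverse; this illustrates the basic ELM, not the paper's extended training algorithm, and all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, k=40):
    """Fit a basic ELM: random fixed hidden layer, closed-form output weights."""
    j = X.shape[1]
    W = rng.normal(size=(j, k))   # random input-to-hidden weights (never trained)
    b = rng.normal(size=k)        # random hidden biases (never trained)
    H = np.tanh(X @ W + b)        # hidden-layer activations, shape (n, k)
    beta = np.linalg.pinv(H) @ Y  # output weights via Moore-Penrose pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Example: approximate a continuous 1-D function, as the universal
# approximation claim in the quote suggests is possible.
X = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
Y = np.sin(2.0 * np.pi * X)
W, b, beta = elm_fit(X, Y, k=40)
pred = elm_predict(X, W, b, beta)
```

Because the hidden weights are fixed, training reduces to one linear least-squares problem, which is why ELM training is fast compared with backpropagation.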