2021
DOI: 10.7759/cureus.18866

Role of Layers and Neurons in Deep Learning With the Rectified Linear Unit

Abstract: Deep learning is used to classify data into several groups based on nonlinear curved surfaces. In this paper, we focus on the theoretical analysis of deep learning using the rectified linear unit (ReLU) activation function. Because layers approximate a nonlinear curved surface, increasing the number of layers improves the approximation accuracy of the curved surface. While neurons perform a layer-by-layer approximation of the most appropriate hyperplanes, increasing their number cannot improve the results obta…
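The abstract's point, that stacked ReLU layers approximate a nonlinear surface while each layer's neurons fit hyperplanes, can be made concrete with a minimal sketch. The following NumPy snippet is not from the paper; the layer widths and random weights are purely illustrative.

```python
import numpy as np

def relu(x):
    # Rectified linear unit: max(0, x), applied elementwise.
    return np.maximum(0.0, x)

def relu_network(x, weights, biases):
    """Forward pass of a plain fully connected ReLU network.

    Each hidden layer computes relu(W @ h + b); the composition of these
    piecewise-linear maps is itself piecewise linear, so adding layers
    refines the approximation of a nonlinear target surface.
    """
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(W @ h + b)
    # Linear output layer (no ReLU) so the output can take any real value.
    return weights[-1] @ h + biases[-1]

# Illustrative layer sizes (assumed, not from the paper): 2 -> 8 -> 8 -> 1.
rng = np.random.default_rng(0)
sizes = [2, 8, 8, 1]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]
print(relu_network(np.array([0.5, -1.2]), weights, biases))
```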

Cited by 10 publications
(8 citation statements)
References 4 publications
“…The encoder (Additional file 1: Figure S1) is characterized by a series of convolutional and pooling layers that doubles the size of the feature map while reducing the number of channels by half. Each convolution uses a 3 × 3 kernel and is followed by batch normalization and a ReLU activation function [16, 17]. The decoder (Additional file 1: Figure S2) recovers the spatial information back to the image space through a series of upsampling and convolution operations, thus increasing the output resolution.…”
Section: Methods
confidence: 99%
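The Conv–BatchNorm–ReLU stage described in that statement can be sketched as below. This is not the cited paper's code; it is a generic PyTorch illustration in which the channel counts, pooling choice, and input size are assumptions made for the example.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """One encoder stage: 3x3 convolution, batch normalization, ReLU,
    then 2x2 max pooling. Channel counts are illustrative only."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(kernel_size=2)  # halves spatial resolution

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pool(self.block(x))

# Illustrative use: a 1-channel 64x64 input through two stages.
x = torch.randn(1, 1, 64, 64)
stage1 = ConvBlock(1, 16)
stage2 = ConvBlock(16, 32)
print(stage2(stage1(x)).shape)  # torch.Size([1, 32, 16, 16])
```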
“…where the input and output of the k-th residual unit are denoted by X^(k+1) and X^(k+2), respectively, the residual unit function ϕ is composed of a combination of the rectified linear unit (ReLU) function [38] and Conv (as illustrated in Figure 9 below), and θ^(k) represents all the parameters of the k-th residual unit.…”
Section: Residual Unit
confidence: 99%
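A residual unit of this kind can be written compactly, assuming the standard additive skip connection X^(k+2) = X^(k+1) + ϕ(X^(k+1); θ^(k)). The sketch below is a generic PyTorch rendering, not the cited paper's exact ϕ; the ReLU/convolution stack and channel count are assumptions.

```python
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    """Generic residual unit: phi is a ReLU + convolution stack and the
    input is added back through a skip connection (standard ResNet form;
    the cited paper's exact phi may differ)."""

    def __init__(self, channels: int):
        super().__init__()
        self.phi = nn.Sequential(
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # X^(k+2) = X^(k+1) + phi(X^(k+1); theta^(k))
        return x + self.phi(x)

unit = ResidualUnit(channels=16)
print(unit(torch.randn(1, 16, 32, 32)).shape)  # torch.Size([1, 16, 32, 32])
```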
“…As shown in Figure 6, the activation function used for the neurons of each layer is the ReLU [46], but the last layer has a linear activation function since the task addressed is a regression task. The activation function φ is the decision-making element that defines the decision boundary in the input space by setting a threshold.…”
Section: Dense Neural Network Model
confidence: 99%
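The architecture in that statement, ReLU hidden layers followed by a linear output layer for regression, corresponds to a model of the following shape. This is a minimal PyTorch sketch with assumed input dimension and layer widths, not the cited paper's actual network.

```python
import torch
import torch.nn as nn

# Hidden layers use ReLU activations; the final layer is linear so the
# network can output unbounded values for regression. The input size (10)
# and widths (64) are illustrative assumptions.
model = nn.Sequential(
    nn.Linear(10, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 1),  # linear output: no activation on the last layer
)

print(model(torch.randn(4, 10)).shape)  # torch.Size([4, 1])
```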