2023
DOI: 10.1016/j.jco.2023.101784

Rates of approximation by ReLU shallow neural networks

Tong Mao, Ding-Xuan Zhou

Cited by 10 publications (1 citation statement)
References 38 publications
“…Regarding activation functions of the hidden neurons, the defaults provided by the TensorFlow library were used. That is, for dense networks, ReLU [44] was used, and for RNNs and LSTMs the hyperbolic tangent [45] was applied. For the activation function of the output layer, ReLU was chosen because the output of the network (V_RMS-Radial) is always non-negative.…”
Section: Neural Network Architectures and Training Process
confidence: 99%
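The quoted setup pairs ReLU hidden units and a ReLU output with dense networks, and relies on the tanh default for recurrent layers. A minimal sketch of that configuration in TensorFlow/Keras follows; the layer widths, input shapes, and loss are hypothetical placeholders, not taken from the citing paper.

# Minimal sketch of the activation choices described above, assuming a
# Keras-style setup; widths and input shapes are hypothetical.
import tensorflow as tf

# Dense network: ReLU in the hidden layers and ReLU at the output, so the
# predicted quantity (a non-negative value such as V_RMS-Radial) stays >= 0.
dense_model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="relu"),
])

# Recurrent network: tf.keras.layers.LSTM uses the hyperbolic tangent by
# default, so no activation argument is passed for the recurrent layer.
lstm_model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(None, 16)),
    tf.keras.layers.Dense(1, activation="relu"),
])

dense_model.compile(optimizer="adam", loss="mse")
lstm_model.compile(optimizer="adam", loss="mse")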