2022
DOI: 10.1038/s43586-022-00125-7
Piecewise linear neural networks and deep learning

Abstract: For the published version, please access https://rdcu.be/cPIGw for online reading, download the manuscript from https://www.nature.com/articles/s43586-022-00125-7#citeas, and see the supplementary information at https://www.nature.com/articles/s43586-022-00125-7#Sec38.

Cited by 18 publications (4 citation statements)
References 133 publications
“…As a branch of machine learning, deep learning is highly capable of modeling nonlinearities with high flexibility and performs much better in dealing with realistic and complex problems. Therefore, deep learning has been introduced into bearing fault diagnosis to obtain a higher correct rate of fault diagnosis in complex environments 12 , 13 .…”
Section: Related Work
confidence: 99%
“…This encompasses continuous and non-polynomial activation for shallow neural networks [2] and non-affine, continuous, and continuously differentiable activation with nonzero derivatives for deep neural networks [3]. Consequently, neural networks utilizing the Rectified Linear Unit (ReLU) activation function have demonstrated efficacy in resolving a broad range of machine learning research problems, their solution curves corresponding to a piecewise-linear approximation [4,5].…”
Section: Introduction
confidence: 99%
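The quoted statement notes that ReLU networks realize piecewise-linear solution curves. A minimal numerical sketch of this property, using an arbitrary randomly initialized one-hidden-layer network (the weights and sizes here are illustrative assumptions, not taken from the paper): away from the finitely many kinks introduced by the ReLU units, second finite differences of the network output vanish, confirming that each piece is affine.

```python
import numpy as np

# A tiny one-hidden-layer ReLU network: f(x) = w2 . relu(W1*x + b1) + b2.
# All weights are arbitrary illustrative values, not from the paper.
rng = np.random.default_rng(0)
W1 = rng.normal(size=5)   # hidden-layer weights
b1 = rng.normal(size=5)   # hidden-layer biases
w2 = rng.normal(size=5)   # output weights
b2 = 0.3                  # output bias

def relu_net(x):
    """Scalar-input ReLU network; returns a scalar output."""
    h = np.maximum(W1 * x + b1, 0.0)  # ReLU activations
    return w2 @ h + b2

# Each hidden unit contributes one breakpoint at x = -b1/W1; between
# breakpoints the map is affine, so second finite differences are
# (numerically) zero at almost every grid point.
xs = np.linspace(-3.0, 3.0, 601)
ys = np.array([relu_net(x) for x in xs])
second_diff = np.abs(np.diff(ys, 2))
print((second_diff < 1e-9).mean())  # fraction of flat points, close to 1
```

With 5 hidden units there are at most 5 breakpoints, so all but a handful of the 599 second differences are zero; the same argument extends to deep ReLU networks, whose outputs are piecewise linear with (many) more pieces.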
“…While CS has improved the performance of CGI through its image reconstruction algorithms, its application is constrained by strong sparsity assumptions and limitations in the reconstruction process [32][33][34]. The emergence of Deep Learning (DL) [35][36][37][38][39] presents an opportunity to relax sparsity constraints by recovering images at ultra-low sampling rates (SR) using trained data and untrained strategies [40][41][42][43]. Concurrently, ToF-based novel computational 3D imaging techniques show great promise across various imaging domains [44][45][46][47].…”
Section: Introduction
confidence: 99%