2020
DOI: 10.1007/s00366-020-00985-1
Optimization free neural network approach for solving ordinary and partial differential equations

Cited by 46 publications (12 citation statements) · References 20 publications
“…The use of ELM for solving linear partial differential equations has been discussed in a number of previous works; see e.g. [31,10,6] and the references therein. For the sake of completeness, we summarize the main procedure below, and we refer the reader to e.g.…”
Section: Solving Linear Differential Equations with Combined ELM/ModBIP (mentioning)
confidence: 99%
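
For orientation, the procedure these statements refer to is the standard ELM collocation recipe. A minimal sketch in our own notation (not quoted from the citing papers): expand the solution over random, fixed hidden units and solve a linear system for the output weights.

```latex
% ELM collocation for a linear problem L u = f (sketch; notation ours).
% Hidden parameters w_j, b_j are drawn randomly and then held fixed.
\[
  u_M(x) = \sum_{j=1}^{M} \beta_j \,\sigma(w_j \cdot x + b_j)
\]
% Enforcing L u_M = f at collocation points x_1, \dots, x_N
% (plus analogous boundary rows) gives a linear system for beta:
\[
  A\beta = f, \qquad
  A_{ij} = \big( L\,\sigma(w_j \cdot x + b_j) \big)\big|_{x = x_i}, \qquad
  f_i = f(x_i),
\]
% which is solved by linear least squares; no iterative,
% gradient-based optimization is involved.
```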
“…[33,20,30,21,47]), which can be traced to Turing's unorganized machine and Rosenblatt's perceptron [42,35] and have witnessed a revival in neuro-computations in recent years. The application of ELM to function approximations and linear differential equations have been considered in several recent works [1,45,40,32,29,10]. Domain decomposition has found widespread applications in classical numerical methods [39,41,4,8,5].…”
Section: Introduction (mentioning)
confidence: 99%
“…In ELM one assigns random values to, and fixes, the hidden-layer coefficients, and only allows the output-layer (assumed to be linear) coefficients to be trainable. For linear problems, the resultant system becomes linear with respect to the output-layer coefficients, which can then be determined by a linear least squares method [26,41,14,10,11,4]. Random-weight neural networks similarly possess a universal approximation property.…”
mentioning
confidence: 99%
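
As a concrete illustration of the recipe described in that statement, here is a minimal, self-contained sketch (our own construction, not code from the cited paper or the citing works; the test ODE, activation, network size, and sampling ranges are arbitrary choices):

```python
import numpy as np

# ELM sketch for the linear ODE u'(x) + u(x) = f(x) on [0, 1], u(0) = 1,
# manufactured so the exact solution is u(x) = exp(-x) + x  =>  f(x) = 1 + x.
rng = np.random.default_rng(0)

M = 50                                   # number of hidden neurons
w = rng.uniform(-5.0, 5.0, M)            # hidden weights: random, never trained
b = rng.uniform(-5.0, 5.0, M)            # hidden biases:  random, never trained

x = np.linspace(0.0, 1.0, 100)[:, None]  # collocation points, shape (100, 1)

phi = np.tanh(w * x + b)                 # phi[i, j] = sigma(w_j x_i + b_j)
dphi = w * (1.0 - phi**2)                # d/dx tanh(w x + b) = w (1 - tanh^2)

# Residual rows (phi' + phi) beta = f(x_i), plus one boundary row u(0) = 1.
A = np.vstack([dphi + phi, np.tanh(b)[None, :]])
rhs = np.concatenate([1.0 + x.ravel(), [1.0]])

# The only "training": a linear least-squares solve for the output weights.
beta, *_ = np.linalg.lstsq(A, rhs, rcond=None)

u = phi @ beta                           # network solution at the collocation points
print("max error:", np.abs(u - (np.exp(-x) + x).ravel()).max())
```

Because the hidden-layer parameters w and b are never updated, the single least-squares solve is the entire training step, which is what makes the approach optimization-free in the sense of the paper's title.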