2020
DOI: 10.48550/arxiv.2012.02895
Preprint

Local Extreme Learning Machines and Domain Decomposition for Solving Linear and Nonlinear Partial Differential Equations

Suchuan Dong,
Zongwei Li

Abstract: We present a neural network-based method for solving linear and nonlinear partial differential equations, by combining the ideas of extreme learning machines (ELM), domain decomposition and local neural networks. The field solution on each sub-domain is represented by a local feed-forward neural network, and C^k continuity with an appropriate integer k is imposed on the sub-domain boundaries. Each local neural network consists of a small number (one or more) of hidden layers, while its last hidden layer can be…


Cited by 7 publications (145 citation statements)
References 40 publications
“…Their weakness lies in the limited accuracy and the high computational cost (long network-training time). Another promising class of neural network-based methods for computational PDEs has recently appeared [13,14,18,21,17], which are based on a type of randomized neural networks called extreme learning machines (ELM) [29,30]. With these methods the weight/bias coefficients in the hidden layers of the neural network are set to random values and are fixed.…”
Section: Introduction
confidence: 99%
“…With these methods the weight/bias coefficients in the hidden layers of the neural network are set to random values and are fixed. Only the coefficients of the linear output layer are trainable, and they are trained by a linear least squares method for linear PDEs and by a nonlinear least squares method for nonlinear PDEs [13]. It has been shown in [13] that the accuracy and the computational cost (network training time) of the ELM-based method are considerably superior to those of the aforementioned DNN-based PDE solvers.…”
Section: Introduction
confidence: 99%
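The ELM training procedure described in the citation statements above — fix the hidden-layer weights/biases at random values and solve only for the linear output-layer coefficients by linear least squares — can be sketched as follows. This is an illustrative toy, not the authors' code: the test problem (u''(x) = -π² sin(πx) on [0, 1] with zero boundary values), the neuron count, and the random-weight ranges are assumed choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 100                       # number of hidden (random-feature) neurons
w = rng.uniform(-5.0, 5.0, M) # hidden weights: random and FIXED (not trained)
b = rng.uniform(-5.0, 5.0, M) # hidden biases: random and FIXED

def phi(x):
    # Hidden-layer output tanh(w*x + b) for each point in x; shape (len(x), M)
    return np.tanh(np.outer(x, w) + b)

def phi_xx(x):
    # Second derivative of each tanh feature w.r.t. x:
    # d^2/dx^2 tanh(w*x + b) = -2*t*(1 - t^2)*w^2, with t = tanh(w*x + b)
    t = np.tanh(np.outer(x, w) + b)
    return -2.0 * t * (1.0 - t**2) * w**2

x = np.linspace(0.0, 1.0, 80)             # collocation points
f = -np.pi**2 * np.sin(np.pi * x)         # right-hand side of u'' = f

# Linear system for the output-layer coefficients beta:
# rows enforce the PDE residual at collocation points, plus u(0)=u(1)=0.
A = np.vstack([phi_xx(x), phi(np.array([0.0, 1.0]))])
rhs = np.concatenate([f, [0.0, 0.0]])

# Only the output layer is "trained", by linear least squares.
beta, *_ = np.linalg.lstsq(A, rhs, rcond=None)

u = phi(x) @ beta                         # ELM approximation of the solution
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

For a nonlinear PDE the residual rows become nonlinear in beta, and (as the quoted statement notes for [13]) the single `lstsq` call would be replaced by a nonlinear least-squares iteration; the random hidden layer stays fixed either way.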