2021
DOI: 10.1016/j.cma.2021.114188
Extreme learning machine collocation for the numerical solution of elliptic PDEs with sharp gradients

Cited by 45 publications (21 citation statements)
References 39 publications
“…Although it has been demonstrated that neural networks are universal function approximators, the proper choice of architecture generally aids learning and certain challenging problems (e.g. solving non-linear PDEs) may require more specific architectures to capture all their properties (see for instance Extreme Learning Machine [61, 62]). For that reason, we have proposed a new architecture, inspired by [51], to solve non-linear PDEs with discontinuities under two assumptions.…”
Section: PIANN Architecture
confidence: 99%
“…that of identifying/discovering the hidden macroscopic laws, thus learning nonlinear operators and constructing coarse-scale dynamical models of ODEs and PDEs and their closures, from microscopic large-scale simulations and/or from multi-fidelity observations [10,57,58,59,62,9,3,47,74,15,16,48]. Second, based on the constructed coarse-scale models, to systematically investigate their dynamics by efficiently solving the corresponding differential equations, especially when dealing with (high-dimensional) PDEs [24,13,15,16,22,23,38,49,59,63]. Towards this aim, physics-informed machine learning [57,58,59,48,53,15,16,40] has been addressed to integrate available/incomplete information from the underlying physics, thus relaxing the "curse of dimensionality".…”
Section: Introduction
confidence: 99%
“…A number of further developments of the ELM technique for solving linear and nonlinear PDEs appeared recently; see e.g. [10,6,19,12,17], among others. In order to address the influence of random initialization of the hidden-layer coefficients on the ELM accuracy, a modified batch intrinsic plasticity (modBIP) method is developed in [10] for pre-training the random coefficients in the ELM network.…”
Section: Introduction
confidence: 99%
“…The accuracy of the combined modBIP/ELM method has been shown to be insensitive to the random initializations of the hidden-layer coefficients. In [6] the authors present a method for solving one-dimensional linear elliptic PDEs based on ELM with single hidden-layer feedforward neural networks and the sigmoid activation function. The random parameters in the activation function are set based on the location of the domain of interest and the function derivative information.…”
Section: Introduction
confidence: 99%
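The excerpt above describes the general ELM collocation idea referenced by the citing papers: a single-hidden-layer network with fixed, randomly assigned sigmoid parameters, where only the output weights are trained by solving a linear collocation system. Below is a minimal sketch of that idea for a 1D problem with a sharp interior gradient, assuming NumPy. The centre/slope heuristic, the neuron count, and the manufactured solution are illustrative assumptions for this sketch, not the exact parameter-setting rule of the cited paper (which uses the domain location and derivative information).

```python
# Minimal ELM collocation sketch for u''(x) = f(x) on (0, 1) with Dirichlet BCs.
# Hidden-layer sigmoid parameters are drawn randomly (assumption: centres uniform
# in the domain, slopes wide enough for the sharp layer); only the output weights
# are computed, via linear least squares on the collocation equations.
import numpy as np

rng = np.random.default_rng(0)

# Manufactured solution with a sharp gradient at x = 0.5 (illustrative assumption).
k = 20.0
u_ex = lambda x: np.tanh(k * (x - 0.5))
f    = lambda x: -2.0 * k**2 * np.tanh(k * (x - 0.5)) * (1.0 - np.tanh(k * (x - 0.5))**2)

N = 200                               # number of hidden neurons (basis functions)
centres = rng.uniform(0.0, 1.0, N)    # sigmoid centres placed inside the domain
slopes  = rng.uniform(5.0, 60.0, N)   # slopes able to resolve the interior layer
a = slopes
b = -slopes * centres                 # sigma(a*x + b) is centred at 'centres'

def sig(z):
    return 1.0 / (1.0 + np.exp(-z))

def phi(x):                           # hidden-layer outputs, shape (len(x), N)
    return sig(np.outer(x, a) + b)

def phi_xx(x):                        # d^2/dx^2 of each sigmoid basis function
    s = sig(np.outer(x, a) + b)
    return (a**2) * s * (1.0 - s) * (1.0 - 2.0 * s)

# Collocation points: interior points for the PDE residual, plus the two boundary points.
x_col = np.linspace(0.0, 1.0, 400)[1:-1]
A = np.vstack([phi_xx(x_col),                 # rows enforcing u''(x_i) = f(x_i)
               phi(np.array([0.0, 1.0]))])    # rows enforcing u(0), u(1)
rhs = np.concatenate([f(x_col), [u_ex(0.0), u_ex(1.0)]])

# Output weights: the only trained parameters, obtained by one linear solve.
w, *_ = np.linalg.lstsq(A, rhs, rcond=None)

x_test = np.linspace(0.0, 1.0, 1000)
err = np.max(np.abs(phi(x_test) @ w - u_ex(x_test)))
print(f"max abs error on test grid: {err:.2e}")
```

Because the hidden-layer parameters stay fixed, "training" reduces to a single least-squares solve rather than iterative gradient descent, which is what makes the ELM approach attractive for elliptic problems with sharp gradients; the quality of the result hinges on how the random sigmoid parameters are chosen, which is the issue the cited paper and the modBIP pre-training of [10] both address.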