CAN-PINN: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method (2022)
DOI: 10.1016/j.cma.2022.114909

Cited by 124 publications (33 citation statements)
References 49 publications
“…Further research can look into alternative or hybrid methods of differentiating the differential equations. To speed up PINN training, the loss function in Chiu et al. [33] is defined using both numerical differentiation and automatic differentiation. The proposed can-PINN, i.e.…”
Section: Improving Implementation Aspects in PINN
confidence: 99%
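To make the coupled-loss idea in this excerpt concrete, here is a minimal sketch assuming a PyTorch network u(x, t) and a 1-D advection residual u_t + c * u_x = 0. The function name, the choice of PDE, and the step size h are illustrative assumptions, not the paper's exact can-PINN scheme: one derivative is taken by automatic differentiation (AD) and the other by numerical differentiation (ND) over nearby support points.

```python
import torch

def hybrid_residual_loss(model, x, t, c=1.0, h=1e-2):
    # AD branch: differentiate the network output with respect to t via autograd.
    t = t.detach().clone().requires_grad_(True)
    u = model(torch.stack([x, t], dim=-1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]

    # ND branch: central difference over the neighbouring support points x +/- h.
    u_plus = model(torch.stack([x + h, t], dim=-1))
    u_minus = model(torch.stack([x - h, t], dim=-1))
    u_x = (u_plus - u_minus).squeeze(-1) / (2.0 * h)

    # Advection residual u_t + c * u_x = 0, penalised in least squares.
    residual = u_t + c * u_x
    return (residual ** 2).mean()

model = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 1))
loss = hybrid_residual_loss(model, torch.rand(64), torch.rand(64))
loss.backward()  # gradients reach the weights through both the AD and ND branches
```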
“…A wide variety of PINNs has appeared in the literature in recent times. can-PINNs [15] link derivative terms with nearby support points, an approach generally applicable to Taylor-series-expansion-based numerical schemes. Apart from demonstrating good dispersion and dissipation characteristics, they are highly trainable and require four to sixteen times fewer collocation points than original PINNs.…”
Section: Literature Review (A. Physics-Informed Neural Network)
confidence: 99%
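For context on the Taylor-series link this excerpt refers to, the standard central-difference construction (textbook material, not the paper's specific coupled scheme) shows how a derivative at x is tied to the nearby support points x ± h:

$$u(x \pm h) = u(x) \pm h\,u'(x) + \frac{h^2}{2}\,u''(x) \pm \frac{h^3}{6}\,u'''(x) + O(h^4),$$

$$u'(x) \approx \frac{u(x+h) - u(x-h)}{2h} \quad \text{with error } O(h^2).$$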
“…While both methods have their pros and cons, ND-loss can be flexibly implemented across many different neural network (NN) architectures, including both multi-layer perceptrons (MLPs) and convolutional neural networks (CNNs), because it does not require the NN to retain differentiability, unlike AD. Recent studies [4]-[6] have suggested that ND-type methods, and especially coupled-automatic-numerical differentiation (CAN)-loss [6], can more robustly and efficiently produce accurate solutions with fewer training samples, whereas conventional AD-loss is prone to failure during training. This is because ND-type methods approximate high-order derivatives using PINN output from neighbouring samples; hence, they can effectively connect sparse samples into piecewise regions via these local approximations, thereby facilitating fast physics-informed learning across the entire domain with sparser samples.…”
Section: Introduction
confidence: 99%
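As a hedged illustration of the ND-type mechanism this excerpt describes (derivatives approximated from network outputs at neighbouring samples, with no differentiation of the network itself), the sketch below approximates u'' with a 3-point stencil for a 1-D Poisson problem. The problem choice, the function names, and the step size are assumptions made for this example, not taken from the cited works:

```python
import torch

def nd_poisson_loss(model, x, f, h=1e-2):
    u_c = model(x)        # u(x)
    u_p = model(x + h)    # u at the neighbouring support point x + h
    u_m = model(x - h)    # u at x - h
    # 3-point central stencil: second-order approximation of u''(x)
    # built purely from network evaluations, with no autograd needed.
    u_xx = (u_p - 2.0 * u_c + u_m) / h ** 2
    return ((u_xx - f(x)) ** 2).mean()

model = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 1))
x = torch.rand(64, 1)
loss = nd_poisson_loss(model, x, lambda s: torch.sin(s))  # u'' = sin(x) as a toy RHS
```

Because the stencil couples each sample to its neighbours, sparse collocation points are effectively stitched into piecewise regions, which is the intuition behind the claim above about learning from sparser samples.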