2021 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn52387.2021.9533606
Adversarial Multi-task Learning Enhanced Physics-informed Neural Networks for Solving Partial Differential Equations

Cited by 14 publications (8 citation statements) | References 15 publications
“…The parameters of both networks are updated concurrently, with the expectation that the preselector network distills the hidden PDE function N ξ and informs the physics back to the solver. MT is a multi-task learning function that balances the multiple losses, such as Uncert [33] and PCGrad [34], which have been shown to improve PINN generalization performance [16]. Algorithm 1 describes a relaxed approach that numerically minimizes the loss in equation (8) until a plateau is detected; the solver network is then converged independently.…”
Section: Derivative Preparation
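The PCGrad method mentioned above resolves conflicts between task losses by projecting away the component of one task's gradient that opposes another's. A minimal NumPy sketch of that projection idea (not the authors' implementation; task ordering and averaging are simplifications here):

```python
import numpy as np

def pcgrad(grads):
    """Project each task gradient off the conflicting directions of the others.

    grads: list of 1-D float arrays, one flattened gradient per task loss.
    Returns a single combined gradient (mean of the projected gradients).
    """
    projected = [g.astype(float).copy() for g in grads]
    for i, g_i in enumerate(projected):
        for j, g_j in enumerate(grads):
            if i == j:
                continue
            dot = g_i @ g_j
            if dot < 0.0:  # gradients conflict: remove the opposing component
                g_i -= dot / (g_j @ g_j) * g_j
    return np.mean(projected, axis=0)
```

For two conflicting gradients such as `[1, 0]` and `[-1, 1]`, the combined update no longer points against either task, which is the behavior that reportedly stabilizes multi-loss PINN training.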
“…Remark that if the candidate library is incomplete (some necessary candidate terms are missing), fine-tuning the discovered coefficients becomes less beneficial because the governing PDE takes a faulty form. Also, without an appropriate initialization of the target PDE coefficients, training a physics-informed neural network (PINN) [9] remains a task that could be developed further by, for example, multi-task learning [16] or sinusoidal feature mapping [17], even when the actual governing function is presumably known beforehand.…”
Section: Introduction
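The candidate-library idea discussed above can be illustrated with a small sketch: stack candidate terms into a matrix Θ and regress the time derivative onto it, so nonzero coefficients identify the governing PDE. The data and term names below are illustrative assumptions, not taken from the cited work:

```python
import numpy as np

# Synthetic "derivative measurements" consistent with a Burgers-like law
# u_t = -u*u_x + 0.1*u_xx (noise-free, purely for illustration).
rng = np.random.default_rng(0)
n = 200
u, u_x, u_xx = rng.normal(size=(3, n))
u_t = -1.0 * u * u_x + 0.1 * u_xx

# Candidate library: columns are hypothesized right-hand-side terms.
theta = np.column_stack([u, u_x, u_xx, u * u_x])

# Least-squares fit of u_t = theta @ xi recovers the true coefficients
# only because the library contains the correct terms; an incomplete
# library would smear the signal across the wrong columns.
xi, *_ = np.linalg.lstsq(theta, u_t, rcond=None)
```

Here `xi` recovers the coefficients of `u_xx` and `u*u_x` and leaves the spurious candidates near zero, which is exactly what breaks down when the library omits a necessary term.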
“…The broad utility of PINNs is revealed by their numerous applications to problems in aerodynamics [16], surface physics [17], power systems [18], cardiology [19], and soft biological tissues [20]. PINNs have also been integrated into the multi-task learning [21] and meta-learning [22] frameworks. When implementing PINN algorithms to find functions in an unbounded system, the unbounded variables cannot be simply normalized, precluding reconstruction of solutions outside the range of data.…”
Section: Introduction
“…However, due to inaccurate coefficient optimization, this method may produce erroneous identifications. Thanasutives et al. proposed a noise-aware physics-informed machine learning framework [33] that trains a PINN on discrete Fourier transform features within a multi-task learning paradigm [34] to reveal a set of optimal reduced partial differential equations. PDE-Read [35] introduces a two-network variant and a recursive feature elimination algorithm to identify human-readable PDEs from data.…”
Section: Introduction
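The discrete-Fourier-transform preprocessing mentioned in the excerpt above amounts to denoising measurements before derivatives are formed. A hedged sketch of that step (the cutoff `k` and the signal are illustrative assumptions, not the cited framework's actual procedure):

```python
import numpy as np

def lowpass_dft(signal, k):
    """Keep only the k lowest-frequency DFT modes of a real 1-D signal."""
    freq = np.fft.rfft(signal)
    freq[k:] = 0.0  # zero out high-frequency (noise-dominated) modes
    return np.fft.irfft(freq, n=len(signal))

# Illustrative noisy sample of a smooth field on a periodic grid.
t = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
clean = np.sin(t)
noisy = clean + 0.3 * np.random.default_rng(1).normal(size=t.size)
denoised = lowpass_dft(noisy, k=4)
```

Since differentiation amplifies high-frequency noise, truncating the spectrum before computing derivatives is a common way to make the downstream PDE-identification step less error-prone.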