2022
DOI: 10.1007/s42452-022-04938-9

Neural network guided adjoint computations in dual weighted residual error estimation

Abstract: In this work, we are concerned with neural network guided goal-oriented a posteriori error estimation and adaptivity using the dual weighted residual method. The primal problem is solved using classical Galerkin finite elements. The adjoint problem is solved in strong form with a feedforward neural network using two or three hidden layers. The main objective of our approach is to explore alternatives for solving the adjoint problem with greater potential of a numerical cost reduction. The proposed algorithm is…
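The abstract's core idea can be illustrated with a minimal sketch (not the authors' code): a small feedforward network approximates an adjoint problem in strong form. As a model setup, take the primal problem -u'' = f on (0, 1) with goal functional J(u) = ∫ u dx, whose adjoint is -z''(x) = 1, z(0) = z(1) = 0, with exact solution z(x) = x(1 - x)/2. All concrete choices below (layer widths, optimizer, step sizes, the finite-difference second derivative in place of automatic differentiation) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hidden layers of width 8 with tanh, matching the "two or three
# hidden layers" mentioned in the abstract (the widths are arbitrary here).
shapes = [(1, 8), (1, 8), (8, 8), (1, 8), (8, 1), (1, 1)]
theta = np.concatenate([0.5 * rng.standard_normal(s).ravel() for s in shapes])

def unpack(t):
    out, i = [], 0
    for s in shapes:
        n = s[0] * s[1]
        out.append(t[i:i + n].reshape(s))
        i += n
    return out

def z(x, t):
    W1, b1, W2, b2, W3, b3 = unpack(t)
    h = np.tanh(x[:, None] @ W1 + b1)
    h = np.tanh(h @ W2 + b2)
    # multiply by x(1-x) so the homogeneous boundary conditions hold exactly
    return x * (1.0 - x) * (h @ W3 + b3).ravel()

def loss(t, x, h=1e-3):
    # strong-form residual of -z'' = 1 at the collocation points,
    # with z'' approximated by a central finite difference
    zpp = (z(x + h, t) - 2.0 * z(x, t) + z(x - h, t)) / h**2
    return np.mean((-zpp - 1.0) ** 2)

x = np.linspace(0.05, 0.95, 50)          # interior collocation points

def num_grad(t, eps=1e-6):
    # forward-difference gradient; a real implementation would use autodiff
    f0, g = loss(t, x), np.empty_like(t)
    for i in range(t.size):
        tp = t.copy(); tp[i] += eps
        g[i] = (loss(tp, x) - f0) / eps
    return g

loss0 = loss(theta, x)
for _ in range(100):                     # gradient descent with backtracking
    g = num_grad(theta)
    step, f = 1e-2, loss(theta, x)
    while step > 1e-10 and loss(theta - step * g, x) >= f:
        step *= 0.5
    if step > 1e-10:
        theta = theta - step * g
loss1 = loss(theta, x)
```

In the paper, such a strong-form adjoint approximation would then feed into the dual weighted residual error estimator; here the sketch only demonstrates that a tiny network can drive the strong residual down on a model problem.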

Cited by 7 publications (3 citation statements)
References 51 publications
“…For those related to physics-informed neural networks (PINN) and least squares methods; see Sirignano & Spiliopoulos [37, Section 7], Mishra & Molinaro [21,50], Pousin [18, Section 4], Cai, Chen & Liu [22], and Berrone, Canuto & Pintore [51]. Recently, a posteriori error analysis has also been studied, in particular goal-oriented analysis using the dual-weighted residual (DWR) methodology; see, e.g., Roth, Schröder & Wick [52], and Minakowski & Richter [53]. We note that in our current work, while we have in mind the error analysis for neural-control approximations, the abstract analysis presented in Section 2 is essentially an extension of the above-mentioned a priori analysis to a certain class of problems involving a convex and differentiable cost functional.…”
Section: Related Work
confidence: 99%
“…These include the test function choice [7], forward solve [8], adjoint solve [9], derivative recovery procedure [10], error estimation [11], metric/monitor function/sizing field construction step [12,13,14], and the entire mesh adaptation loop [6,15,16].…”
Section: Accelerating Mesh Adaptation Using Neural Networks
confidence: 99%
“…A similar 'focused' approach is also used in [9], which emulates the adjoint solve procedure. This is done on the base mesh and the data-driven adjoint solution is projected into an enriched space, where error indicators are assembled and thereby used to drive mesh adaptation.…”
Section: Novel Aspects of the Present Work
confidence: 99%
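The "focused" approach quoted above weights local residuals by the difference between an adjoint solution and its projection onto the base mesh, and marks cells by indicator size. A hedged numpy sketch for the 1D model problem -u'' = 1 on (0, 1), with the analytic adjoint z(x) = x(1 - x)/2 standing in for a data-driven one, is given below; the function and variable names (`dwr_indicators`, `eta`, `marked`) are illustrative, not from either paper.

```python
import numpy as np

nodes = np.linspace(0.0, 1.0, 6)             # base mesh: 5 elements
z = lambda x: 0.5 * x * (1.0 - x)            # adjoint solution (analytic here)

def dwr_indicators(nodes):
    # eta_K ~ |integral over K of residual * (z - I_h z)|, where I_h z is
    # the piecewise-linear interpolant of the adjoint on the base mesh
    eta = []
    for a, b in zip(nodes[:-1], nodes[1:]):
        m = 0.5 * (a + b)
        # for P1 elements the interior residual of -u_h'' = 1 is just f = 1;
        # the weight is z minus its linear interpolant, at the midpoint
        weight = z(m) - 0.5 * (z(a) + z(b))
        eta.append(abs(1.0 * weight) * (b - a))   # midpoint quadrature
    return np.array(eta)

eta = dwr_indicators(nodes)
# mark the elements carrying the largest indicators for refinement
marked = np.argsort(eta)[::-1][:2]
```

For this model problem z'' is constant, so all indicators coincide; on a problem with localized goal sensitivity the same loop would concentrate refinement where the adjoint weight is large.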