Physics-informed neural networks (PINNs), introduced in [1], are effective in solving integer-order partial differential equations (PDEs) based on scattered and noisy data. PINNs employ standard feedforward neural networks (NNs) with the PDEs explicitly encoded into the NN using automatic differentiation, while the sum of the mean-squared PDE residuals and the mean-squared error in the initial/boundary conditions is minimized with respect to the NN parameters. Here we extend PINNs to fractional PINNs (fPINNs) to solve space-time fractional advection-diffusion equations (fractional ADEs), and we systematically study their convergence, thereby explaining both fPINNs and PINNs for the first time. Specifically, we demonstrate their accuracy and effectiveness in solving multi-dimensional forward and inverse problems with forcing terms whose values are known only at randomly scattered spatio-temporal coordinates (black-box forcing terms). A novel element of fPINNs is the hybrid approach that we introduce for constructing the residual in the loss function, using automatic differentiation for the integer-order operators and numerical discretization for the fractional operators. This approach bypasses the difficulty that automatic differentiation is not applicable to fractional operators, because the standard chain rule of integer-order calculus is not valid in fractional calculus. To discretize the fractional operators, we employ the Grünwald-Letnikov (GL) formula in one-dimensional fractional ADEs and the vector GL formula in conjunction with the directional fractional Laplacian in two- and three-dimensional fractional ADEs. We first consider the one-dimensional fractional Poisson equation and compare the convergence of fPINNs against the finite difference method (FDM). We present the solution convergence using both the mean L² error and the standard deviation due to sensitivity to NN parameter initialization. Using different GL formulas, we observe first-, second-, and third-order convergence rates for small training sets, but the error saturates for larger training sets. We explain these results by analyzing the four sources of numerical error: discretization, sampling, NN approximation, and optimization. The total error decays monotonically (below 10⁻⁵ for the third-order GL formula) but saturates beyond that point due to the optimization error. We also analyze the relative balance between the discretization and sampling errors and observe that the sampling size and the number of discretization points (auxiliary points) should be comparable to achieve the highest accuracy. As we increase the depth of the NN up to a certain value, the mean error decreases and the standard deviation increases, whereas the width has essentially no effect unless its value is either too small or too large. We next consider time-dependent fractional ADEs and compare white-box (WB) and black-box (BB) forcing. We observe that for WB forcing our results are similar to the aforementioned cases; however, ...
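The abstract above names the two ingredients of the fPINN residual: a GL quadrature for the fractional operator and automatic differentiation for the integer-order terms. The following is a minimal sketch of that hybrid construction for a 1-D fractional ADE of the form u_t + c u_x = κ D_x^α u + f(t, x), assuming a PyTorch surrogate `net` mapping (t, x) to u. The function names (`gl_weights`, `gl_fractional_derivative`, `fpinn_residual`), the constant coefficients, and the choice of the unshifted first-order GL variant are illustrative assumptions, not the paper's implementation (the paper also employs shifted/higher-order and vector GL formulas).

```python
import torch

def gl_weights(alpha, n):
    # Grünwald-Letnikov coefficients g_k = (-1)^k * binom(alpha, k),
    # via the standard recurrence g_k = g_{k-1} * (1 - (alpha + 1) / k).
    w = torch.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def gl_fractional_derivative(u, x, alpha, h, n_aux):
    # First-order (unshifted) GL approximation of the left-sided
    # fractional derivative of order alpha at points x:
    #   D^alpha u(x) ≈ h^{-alpha} * sum_k g_k * u(x - k h).
    # In practice the auxiliary points x - k h are truncated at the
    # domain boundary (or u is extended by zero outside the domain).
    w = gl_weights(alpha, n_aux)
    shifts = torch.arange(n_aux + 1) * h              # auxiliary offsets k h
    vals = torch.stack([u(x - s) for s in shifts])    # NN surrogate evaluations
    return (w[:, None] * vals).sum(dim=0) / h ** alpha

def fpinn_residual(net, t, x, alpha, h, n_aux, c, kappa, f):
    # Hybrid residual for  u_t + c u_x = kappa * D_x^alpha u + f(t, x):
    # integer-order terms via automatic differentiation, the fractional
    # term via the GL quadrature above (no chain rule needed for it).
    t = t.clone().requires_grad_(True)
    x = x.clone().requires_grad_(True)
    u = net(torch.stack([t, x], dim=-1)).squeeze(-1)
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_at = lambda xi: net(torch.stack([t, xi], dim=-1)).squeeze(-1)
    frac = gl_fractional_derivative(u_at, x, alpha, h, n_aux)
    return u_t + c * u_x - kappa * frac - f(t, x)
```

The training loss would then be the mean-squared residual at the sampled points plus the mean-squared mismatch in the initial/boundary data; consistent with the convergence study above, the number of auxiliary points per training point should be kept comparable to the training set size.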
Physics-informed neural networks (PINNs) have recently emerged as an alternative way of solving partial differential equations (PDEs) without the need for building elaborate grids; instead, the implementation is straightforward. In particular, in addition to the deep neural network (DNN) for the solution, a second DNN is considered that represents the residual of the PDE. The residual is then combined with the mismatch in the given data of the solution in order to formulate the loss function. This framework is effective but lacks uncertainty quantification of the solution due to the inherent randomness in the data or due to the approximation limitations of the DNN architecture. Here, we propose a new method with the objective of endowing the DNN with uncertainty quantification for both sources of uncertainty, i.e., the parametric uncertainty and the approximation uncertainty. We first account for the parametric uncertainty when the parameter in the differential equation is represented as a stochastic process. Multiple DNNs are designed to learn the modal functions of the arbitrary polynomial chaos (aPC) expansion of its solution by using stochastic data from sparse sensors. We can then make predictions from new sensor measurements very efficiently with the trained DNNs. Moreover, we employ dropout to correct for overfitting and also to quantify the uncertainty of the DNNs in approximating the modal functions. We then design an active learning strategy based on the dropout uncertainty to place new sensors in the domain in order to improve the predictions of the DNNs. Several numerical tests are conducted for both the forward and the inverse problems to quantify the effectiveness of PINNs combined with uncertainty quantification. This new NN-aPC paradigm of physics-informed deep learning with uncertainty quantification can be readily applied to other types of stochastic PDEs in multiple dimensions.
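As a sketch of the dropout mechanism described above, the snippet below keeps dropout active at prediction time (Monte Carlo dropout) so that repeated stochastic forward passes yield a mean and a standard deviation for one aPC modal function. `ModalNet`, `mc_dropout_predict`, and the architecture are hypothetical, illustrative choices, not the paper's actual network.

```python
import torch
import torch.nn as nn

class ModalNet(nn.Module):
    # One DNN per aPC modal function; dropout layers stay active at
    # prediction time to provide Monte Carlo uncertainty estimates.
    def __init__(self, width=64, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(), nn.Dropout(p_drop),
            nn.Linear(width, width), nn.Tanh(), nn.Dropout(p_drop),
            nn.Linear(width, 1),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples=100):
    # Keep dropout "on" (train mode) and average stochastic forward
    # passes; the spread estimates the approximation uncertainty.
    model.train()
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)
```

An active learning loop in this spirit would place the next sensor where the predictive standard deviation is largest, retrain, and repeat.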