“…Furthermore, there exist multiple ways to express the physical constraints or governing equations for both forward and inverse problems, including the collocation-based loss function [Dissanayake and Phan-Thien, 1994, Lagaris et al., 1998, Sirignano and Spiliopoulos, 2018, Raissi, 2018, Raissi et al., 2019, Xu and Darve, 2019], which evaluates the residual of the governing equations at selected collocation points; the energy-based (Ritz-Galerkin) method, which requires numerical integration but reduces the order of the derivatives in the governing equations [Weinan and Yu, 2018, Samaniego et al., 2020]; and the related variational approach of Kharazmi et al. [2021], which parametrizes the trial and test spaces by neural networks and polynomials, respectively. This vast number of choices is further complicated by the large number of tunable hyperparameters, such as the configuration of the neural network [Fuchs et al., 2021, Psaros et al., 2021], the type of activation function [Psaros et al., 2021], the neuron weight initialization [Glorot and Bengio, 2010, He et al., 2015, Goodfellow et al., 2016, Cyr et al., 2020], and the different techniques to impose boundary conditions [Sukumar and Srivastava, 2021]. While these options provide significant flexibility, they can also make it difficult for researchers unfamiliar with neural networks to determine how to train a PINN properly and efficiently. Finally, recent work on meta-learning and on the enforcement of boundary conditions reveals that the multiple physical constraints employed to measure and minimize the error of the approximated solution may conflict with one another: actions that reduce the violation of one set of constraints may increase the error of another set, leading to a compromised local minimizer that is not desirable [Psaros et al., 2021, Yu et al., 2020, Rohrhofer et al., 2021].…”
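To make the distinction between the collocation-based and energy-based formulations concrete, the following is a minimal sketch for a 1D Poisson toy problem, -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0. The PyTorch framework, the network architecture, the boundary penalty weight, and the specific forcing term are illustrative assumptions, not the setup used in the works cited above; soft boundary penalties are used here, whereas hard enforcement [Sukumar and Srivastava, 2021] is another option.

```python
import torch

# Hypothetical toy problem: -u''(x) = f(x) on (0, 1), u(0) = u(1) = 0,
# with f(x) = pi^2 * sin(pi * x) so that the exact solution is u(x) = sin(pi * x).
torch.manual_seed(0)
f = lambda x: (torch.pi ** 2) * torch.sin(torch.pi * x)

# Illustrative network configuration (depth, width, activation are tunable hyperparameters).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def collocation_loss(x):
    """Strong-form (collocation) loss: mean squared PDE residual at sample points.
    Requires second derivatives of the network output."""
    x = x.requires_grad_(True)
    u = net(x)
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return ((-u_xx - f(x)) ** 2).mean()

def energy_loss(x):
    """Energy-based (Ritz) loss: Monte Carlo estimate of the functional
    E[u] = integral of 0.5*u'^2 - f*u over (0, 1).
    Only first derivatives are needed, at the cost of numerical integration."""
    x = x.requires_grad_(True)
    u = net(x)
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    return (0.5 * u_x ** 2 - f(x) * u).mean()

def boundary_penalty():
    """Soft enforcement of the Dirichlet conditions u(0) = u(1) = 0."""
    xb = torch.tensor([[0.0], [1.0]])
    return (net(xb) ** 2).mean()

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    x = torch.rand(128, 1)  # interior collocation / quadrature points
    # Swap collocation_loss(x) for energy_loss(x) to use the energy-based formulation;
    # the penalty weight 100.0 is an assumed value and is itself a tunable hyperparameter.
    loss = collocation_loss(x) + 100.0 * boundary_penalty()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The sketch also illustrates the conflict discussed above: the PDE-residual (or energy) term and the boundary penalty are separate constraints, and gradient steps that decrease one can increase the other, so the chosen penalty weight influences which compromise the optimizer settles into.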