Physics-Informed Neural Networks (PINNs) have recently emerged as a promising application of deep neural networks to the numerical solution of nonlinear partial differential equations (PDEs). However, the original PINN algorithm is known to suffer from stability and accuracy problems in cases where the solution has sharp spatio-temporal transitions. These "stiff" PDEs require an unreasonably large number of collocation points to be solved accurately. It has been recognized that adaptive procedures are needed to force the neural network to accurately fit the stubborn spots in the solution of stiff PDEs. To accomplish this, previous approaches have used fixed weights hard-coded over regions of the solution deemed to be important. In this paper, we propose a fundamentally new method to train PINNs adaptively, where the adaptation weights are fully trainable, so the neural network learns by itself which regions of the solution are difficult and is forced to focus on them; this is reminiscent of the soft multiplicative attention masks used in computer vision. The basic idea behind these Self-Adaptive PINNs is to make the weights increase where the corresponding loss is higher, which is accomplished by training the network to simultaneously minimize the losses and maximize the weights, i.e., to find a saddle point in the cost surface. We show that this is formally equivalent to solving a PDE-constrained optimization problem using a penalty-based method, albeit one in which the monotonically nondecreasing penalty coefficients are trainable. In numerical experiments with an Allen-Cahn "stiff" PDE, the Self-Adaptive PINN outperformed other state-of-the-art PINN algorithms in L2 error by a wide margin, while using a smaller number of training epochs. An Appendix contains additional results with Burgers' and Helmholtz PDEs, which confirm the trends observed in the Allen-Cahn experiments.
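The saddle-point idea can be illustrated on a toy problem. The sketch below is a hypothetical stand-in (a single scalar parameter fit to three targets, not the PINN residual losses of the paper): the per-point weights are updated by gradient ascent on the same weighted loss that the parameter descends, so the weight on the hardest point grows fastest.

```python
# Minimal sketch of the self-adaptive saddle-point idea on a toy problem
# (hypothetical setup, not the paper's PINN losses): fit one scalar theta
# to three targets, where the third target is the "stubborn" point.
y = [0.0, 0.0, 10.0]          # targets; index 2 is hard to fit
theta = 0.0                   # trainable model parameter
lam = [1.0, 1.0, 1.0]         # trainable self-adaptation weights

lr_theta, lr_lam = 0.1, 0.01
for _ in range(200):
    r = [theta - yi for yi in y]                       # pointwise residuals
    grad = 2.0 * sum(l * ri for l, ri in zip(lam, r))  # d/dtheta of sum(lam*r^2)
    # Descend in theta; the step is normalized by sum(lam) so it stays
    # stable as the weights grow (an implementation choice for this sketch).
    theta -= lr_theta * grad / sum(lam)
    # Ascend in lam: each weight grows in proportion to its pointwise loss,
    # so the weights are monotonically nondecreasing.
    lam = [l + lr_lam * ri ** 2 for l, ri in zip(lam, r)]

print(max(range(3), key=lambda i: lam[i]))  # -> 2: the stubborn point
```

The optimizer is thus forced to keep attending to the point it fits worst, which is the mechanism the weights implement at the collocation points of a stiff PDE.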
Uncovering links between processing conditions, microstructure, and properties is a central tenet of materials analysis. It is well known that microstructure determines properties, but expressing these structural features in a universal, quantitative fashion has proved extremely difficult. Recent efforts have focused on training supervised learning algorithms to place microstructure images into predefined classes, but this approach assumes a level of a priori knowledge that may not always be available. This work extends the idea to the semi-supervised context, in which class labels are known with confidence for only a fraction of the microstructures that represent the material system. It is shown that classifiers that perform well on both the high-confidence labeled data and the unlabeled, ambiguous data can be constructed by relying on the labeling consensus of a collection of semi-supervised learning methods. We also demonstrate novel error-estimation approaches for unlabeled data that establish robust confidence bounds on the classification performance over the entire microstructure space.
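The labeling-consensus construction can be sketched as follows. This is a hypothetical toy with two simple stand-in labelers (nearest labeled neighbor and nearest class centroid), not the semi-supervised methods used in the work; a point keeps a class only when the labelers agree, and is otherwise marked ambiguous.

```python
# Toy sketch of consensus labeling (hypothetical stand-in labelers and data):
# several semi-supervised labelers assign classes to unlabeled points, and
# only labels on which all labelers agree are kept.
import math
import random

random.seed(0)
# Synthetic 2-D "feature vectors" for two well-separated classes.
cluster0 = [(random.gauss(0, 0.5), random.gauss(0, 0.5)) for _ in range(20)]
cluster1 = [(random.gauss(5, 0.5), random.gauss(5, 0.5)) for _ in range(20)]
X = cluster0 + cluster1
y_true = [0] * 20 + [1] * 20
labeled = {0, 1, 20, 21}      # labels known with confidence for only a fraction

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_labeled(x):
    """Labeler 1: copy the label of the closest high-confidence point."""
    i = min(labeled, key=lambda j: dist(x, X[j]))
    return y_true[i]

def nearest_centroid(x):
    """Labeler 2: assign the class whose labeled centroid is closest."""
    cents = {}
    for c in (0, 1):
        pts = [X[j] for j in labeled if y_true[j] == c]
        cents[c] = (sum(p[0] for p in pts) / len(pts),
                    sum(p[1] for p in pts) / len(pts))
    return min(cents, key=lambda c: dist(x, cents[c]))

# Consensus: keep a label only when the labelers agree; -1 marks ambiguity.
consensus = [a if a == b else -1
             for a, b in zip(map(nearest_labeled, X), map(nearest_centroid, X))]
unlabeled = [i for i in range(len(X)) if i not in labeled]
acc = sum(consensus[i] == y_true[i] for i in unlabeled) / len(unlabeled)
```

With more labelers, agreement generalizes to a majority or unanimity vote; the disagreement set is exactly the "ambiguous" region on which confidence bounds are most needed.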