A novel solve-training framework is proposed to train neural networks to represent low-dimensional solution maps of physical models. The solve-training framework uses a neural network as the ansatz of the solution map and trains the network variationally via loss functions derived from the underlying physical models. The framework avoids the expensive data preparation of the traditional supervised training procedure, which requires preparing labels for the input data, while still achieving an effective representation of the solution map adapted to the input data distribution. The efficiency of the solve-training framework is demonstrated by obtaining solution maps for linear and nonlinear elliptic equations, and maps from potentials to ground states of linear and nonlinear Schrödinger equations, without an explicit expression for the underlying model. Benefiting from such a difference, we design loss functions based on the PDEs; in other words, we incorporate the model information into the loss functions and solve for the solution map directly without knowing the solution functions.
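As a toy illustration of this idea, the sketch below trains a solution map for a discretized 1D Poisson problem Au = f using only the PDE residual as the loss, with no labeled (f, u) pairs. Since the toy problem is linear, a plain matrix stands in for the neural network ansatz; all parameter choices here are illustrative and are not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete 1D Laplacian with Dirichlet boundary conditions on n interior points.
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Ansatz for the solution map f -> u. The toy problem is linear, so a single
# matrix W suffices; for a nonlinear model this would be a genuine NN.
W = np.zeros((n, n))

lr, batch, steps = 0.02, 32, 20000
for _ in range(steps):
    # Sample unlabeled right-hand sides f from the input distribution.
    F = rng.standard_normal((n, batch))
    # Solve-training loss: mean of ||A (W f) - f||^2, i.e. the PDE residual.
    R = A @ (W @ F) - F
    # Gradient of the mean squared residual with respect to W.
    G = 2.0 * A.T @ R @ F.T / batch
    W -= lr * G

# The trained map should approximately invert A; check on a fresh input.
f = rng.standard_normal(n)
u = W @ f
rel_res = np.linalg.norm(A @ u - f) / np.linalg.norm(f)
```

Note that the loss is driven to zero without ever computing a reference solution u for any sampled f; the model information enters only through the operator A inside the residual.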
Related work

A number of recent works have utilized NNs to address physical models. Generally, they can be organized into three groups: representing solutions via NNs, representing solution maps via NNs, and optimizing traditional iterative solvers via NNs.

Representing solutions of physical models, especially high-dimensional ones, has been a long-standing computational challenge. An NN with multiple inputs and a single output can be used as an ansatz for the solutions of physical models or PDEs, which was first explored in [29] for low-dimensional solutions. Many high-dimensional problems, e.g., interacting spin models and high-dimensional committor functions, have recently been addressed with NN ansatzes and various optimization strategies [5,7,9,19,26,36,37]. The NN, in this case, is valuable for its flexibility and richness in representing high-dimensional functions.

Representing the solution map of a nonlinear problem is challenging as well. For linear problems, the solution map can be represented by a simple matrix (i.e., the Green's function for PDE problems), but an efficient representation of the solution map is unknown for most nonlinear problems. Traditional methods instead solve nonlinear problems via iterative methods, e.g., fixed-point iteration. Since NNs are able to represent high-dimensional nonlinear mappings, they have also been explored in the recent literature to represent solution maps of low-dimensional problems on mesh grids; see, e.g., [10,11,12,20,25,27,30,34,38,39]. These NNs are fitted to a set of training data with solutions prepared in advance, i.e., labeled data. Most work from the first two groups focuses on the creative design of NN architectures, in particular trying to incorporate knowledge of the PDE into the representation.

The last group, very different from the previous two, adopts NNs to optimize traditional iterative methods [14,23,24,35].
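For concreteness, the kind of traditional iterative solver that the second and third groups relate to can be sketched with a fixed-point iteration for a toy nonlinear discrete problem Au + u^3 = f, a crude stand-in for a nonlinear elliptic equation; the operator and constants below are illustrative only.

```python
import numpy as np

# Discrete 1D Laplacian with Dirichlet boundary conditions on n interior points.
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Toy nonlinear problem: A u + u**3 = f, with f small enough that the
# iteration below is a contraction.
f = 0.01 * np.ones(n)

# Fixed-point iteration: u_{k+1} = A^{-1} (f - u_k**3). Each step costs one
# linear solve; NN-based solution maps aim to replace this whole loop with a
# single forward pass.
u = np.zeros(n)
for _ in range(50):
    u_new = np.linalg.solve(A, f - u**3)
    if np.linalg.norm(u_new - u) < 1e-12:
        u = u_new
        break
    u = u_new

residual = np.linalg.norm(A @ u + u**3 - f)
```

The iteration converges geometrically here because the cubic term is small relative to A; for stronger nonlinearities such simple iterations may stall or diverge, which is part of the motivation for learned solution maps.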
Once the iterative methods are optimized on a set of problems, generalization to different boundary conditions, domain geometries, and...