This paper focuses on approximating traveling wave solutions of various partial differential equations via artificial neural networks. A traveling wave solution is hard to obtain with traditional numerical methods when the corresponding wave speed is unknown in advance. We propose a novel method to approximate both the traveling wave solution and the unknown wave speed via a neural network together with an additional free parameter. We prove that, under a mild assumption, the neural network solution for the Keller-Segel equation converges to the analytic solution and the free parameter accurately approximates the wave speed as the corresponding loss tends to zero. We also demonstrate experimentally that reducing the loss through training yields an accurate approximation of the traveling wave solution and the wave speed for the Keller-Segel equation, the Allen-Cahn model with relaxation, and the Lotka-Volterra competition model.
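As a rough illustration of the idea described above, the sketch below (not the authors' code) trains a small PyTorch network for the wave profile together with a trainable scalar for the unknown wave speed. The Fisher-KPP-type residual, the network architecture, the far-field boundary values, and all hyperparameters are illustrative placeholders; the paper's actual systems (Keller-Segel, Allen-Cahn with relaxation, Lotka-Volterra) have different residuals and conditions.

```python
# Hypothetical sketch: approximate a traveling wave profile U(xi) with a neural
# network while treating the unknown wave speed c as an extra trainable parameter.
# The residual U'' + c U' + U(1 - U) = 0 is a placeholder equation, not one of the
# models studied in the paper.
import torch
import torch.nn as nn

class WaveNet(nn.Module):
    def __init__(self, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )
        # Unknown wave speed treated as a free parameter learned jointly.
        self.c = nn.Parameter(torch.tensor(0.5))

    def forward(self, xi):
        return self.net(xi)

model = WaveNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
xi = torch.linspace(-20.0, 20.0, 400).reshape(-1, 1)

for step in range(5000):
    x = xi.clone().requires_grad_(True)
    u = model(x)
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    # Traveling-wave ODE residual (placeholder reaction term U(1 - U)).
    residual = u_xx + model.c * u_x + u * (1.0 - u)
    # Far-field conditions U(-inf) = 1, U(+inf) = 0, imposed at the domain ends.
    boundary = (model(xi[:1]) - 1.0) ** 2 + model(xi[-1:]) ** 2
    loss = residual.pow(2).mean() + boundary.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```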
In this paper, we approximate traveling wave solutions via artificial neural networks. Finding traveling wave solutions can be interpreted as a forward-inverse problem in which a differential equation is solved without knowing the wave speed in advance. In general, additional restrictions are required to ensure the uniqueness of traveling wave solutions that satisfy boundary and initial conditions. This paper builds on the theoretical result that the bistable three-species competition system has a unique traveling wave solution under the assumption that the solution is monotone. Since classical monotone neural networks are not smooth functions, they are not suitable for representing solutions of differential equations. We propose a method of approximating a monotone solution via a neural network that represents the primitive function of another, positive-valued function. For the numerical integration, an operator-learning-based neural network replaces the quadrature rule, which resolves the differentiability issue. We also provide theoretical results showing that a small training loss implies convergence to the true solution. The set of functions that neural networks can represent is dense in the solution space, so these results suggest that neural networks converge with appropriate training. We validate that the proposed method works successfully in cases where the wave speed is identically zero. Our monotone neural network achieves a small error, suggesting that an accurate speed and solution can be estimated when the sign of the wave speed is known.
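The monotone representation mentioned above can be sketched as follows. This is a hypothetical PyTorch illustration in which a positive-output network is integrated with a plain trapezoidal rule; the paper itself replaces the quadrature rule with an operator-learning-based network, which is not shown here. All names and hyperparameters are illustrative.

```python
# Hypothetical sketch: a monotone profile built as the primitive of a strictly
# positive network output, so the represented function is increasing by construction.
import torch
import torch.nn as nn

class PositiveRate(nn.Module):
    """Network whose output is strictly positive via a softplus activation."""
    def __init__(self, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, xi):
        return nn.functional.softplus(self.net(xi))

def monotone_profile(rate_net, xi_grid, u_left=0.0):
    """Monotone profile U(xi) = u_left + integral of g(s) ds from the left endpoint,
    evaluated on a sorted grid with the trapezoidal rule (stand-in for the paper's
    operator-learning integration)."""
    g = rate_net(xi_grid).squeeze(-1)                  # positive integrand values
    dxi = xi_grid[1:, 0] - xi_grid[:-1, 0]
    increments = 0.5 * (g[1:] + g[:-1]) * dxi          # trapezoidal slices
    cumulative = torch.cat([torch.zeros(1), torch.cumsum(increments, dim=0)])
    return u_left + cumulative

xi = torch.linspace(-10.0, 10.0, 201).reshape(-1, 1)
rate = PositiveRate()
u = monotone_profile(rate, xi)   # nondecreasing in xi by construction
```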