The variational multiscale (VMS) formulation formally segregates the evolution of the coarse scales from the fine scales. VMS modeling requires the approximation of the impact of the fine scales in terms of the coarse scales. For this purpose, this work introduces a VMS framework with a special neural-network (NN) structure, which we call the variational super-resolution NN (VSRNN). The VSRNN constructs a super-resolved model of the unresolved scales as a sum of products of individual functions of the coarse scales and physics-informed parameters. Combined with a set of locally non-dimensional features obtained by normalizing the input coarse-scale and output sub-scale basis coefficients, the VSRNN provides a general framework for the discovery of closures for both continuous and discontinuous Galerkin discretizations. By training this model on a sequence of L²-projected data and using the super-resolved state to compute the discontinuous Galerkin fluxes, we improve the optimality and accuracy of the method for both the linear advection problem and turbulent channel flow. Finally, we demonstrate that, in the investigated examples, the present model generalizes to out-of-sample initial conditions and Reynolds numbers. Perspectives are provided on data-driven closure modeling, limitations of the present approach, and opportunities for improvement.
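As a rough illustration of the sum-of-products structure described in the abstract (not the authors' implementation), the sketch below assumes a PyTorch setting; the class name `VSRNNSketch`, the layer widths, and the choice of inputs (normalized coarse-scale basis coefficients plus a vector of physics-informed parameters) are all hypothetical:

```python
import torch
import torch.nn as nn

class VSRNNSketch(nn.Module):
    """Hypothetical sum-of-products closure:
    sub-scale coefficients ≈ sum_k f_k(coarse state) * g_k(physics parameters)."""

    def __init__(self, n_coarse, n_params, n_fine, n_terms=8, width=32):
        super().__init__()
        self.n_terms, self.n_fine = n_terms, n_fine
        # f_k: trainable functions of the locally non-dimensionalized
        # coarse-scale basis coefficients
        self.coarse_net = nn.Sequential(
            nn.Linear(n_coarse, width), nn.Tanh(),
            nn.Linear(width, n_terms * n_fine),
        )
        # g_k: scalar weights generated from physics-informed parameters
        self.param_net = nn.Sequential(
            nn.Linear(n_params, width), nn.Tanh(),
            nn.Linear(width, n_terms),
        )

    def forward(self, coarse, params):
        f = self.coarse_net(coarse).view(-1, self.n_terms, self.n_fine)
        g = self.param_net(params).unsqueeze(-1)   # (batch, n_terms, 1)
        return (f * g).sum(dim=1)                  # (batch, n_fine) sub-scale coefficients
```

The structural point is that the super-resolved state is assembled as a sum over `n_terms` products, each pairing a trainable function of the coarse state with a trainable function of the physics parameters, rather than passing one concatenated input through a monolithic network.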
Numerical simulations of physical systems are heavily dependent on mesh-based models. While neural networks have been extensively explored to assist such tasks, they often ignore the interactions or hierarchical relations between input features and process them as concatenated mixtures. In this work, we generalize the idea of conditional parametrization, using trainable functions of input parameters to generate the weights of a neural network, and extend it in a flexible way to encode information critical to numerical simulations. Inspired by discretized numerical methods, choices of the parameters include physical quantities and mesh-topology features. The functional relation between the modeled features and the parameters is built into the network architecture. The method is implemented on different networks and applied to several frontier scientific machine-learning tasks, including the discovery of unmodeled physics, super-resolution of coarse fields, and the simulation of unsteady flows with chemical reactions. The results show that the conditionally parameterized networks provide superior performance compared to their traditional counterparts. A network architecture named CP-GNet is also proposed as the first deep learning model capable of standalone prediction of reacting flows on irregular meshes.
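A minimal sketch of the conditional-parametrization idea, assuming PyTorch; `ConditionalLinear`, the generator width, and the conditioning inputs are hypothetical stand-ins for the physical quantities and mesh-topology features mentioned above:

```python
import torch
import torch.nn as nn

class ConditionalLinear(nn.Module):
    """A linear layer whose weights and bias are generated by a trainable
    function of conditioning parameters (e.g. physical quantities or
    mesh-topology features), instead of being fixed learnable tensors."""

    def __init__(self, in_dim, out_dim, cond_dim, width=32):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        # Trainable map from the condition vector to the layer's weight + bias
        self.weight_gen = nn.Sequential(
            nn.Linear(cond_dim, width), nn.ReLU(),
            nn.Linear(width, out_dim * (in_dim + 1)),
        )

    def forward(self, x, cond):
        wb = self.weight_gen(cond)                 # (batch, out*(in+1))
        n_w = self.out_dim * self.in_dim
        w = wb[:, :n_w].view(-1, self.out_dim, self.in_dim)
        b = wb[:, n_w:]                            # (batch, out_dim)
        # Per-sample matrix-vector product with the generated weights
        return torch.einsum('boi,bi->bo', w, x) + b
```

In contrast to concatenating the condition onto the input features, this design builds the functional relation between the modeled features and the parameters directly into the architecture: each sample is processed by a weight matrix specific to its own physical and mesh context.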