Using an autoencoder for dimensionality reduction, this article presents a novel projection-based reduced-order model for eigenvalue problems. Reduced-order modeling relies on finding suitable basis functions which define a low-dimensional space in which a high-dimensional system is approximated. Proper orthogonal decomposition (POD) and singular value decomposition (SVD) are often used for this purpose and yield an optimal linear subspace. Autoencoders provide a nonlinear alternative to POD/SVD that may capture features or patterns in the high-fidelity model results more efficiently. Reduced-order models based on an autoencoder and a novel hybrid SVD-autoencoder are developed. These methods are compared with the standard POD-Galerkin approach and are applied to two test cases taken from the field of nuclear reactor physics.
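The contrast between a linear POD/SVD basis and a nonlinear autoencoder can be illustrated with a short sketch. This is not the paper's code; the snapshot matrix, layer sizes and latent dimension below are illustrative placeholders.

```python
# Minimal sketch: linear reduction via truncated SVD (POD) versus a small
# nonlinear autoencoder trained to reconstruct the same snapshots.
import numpy as np
import torch
import torch.nn as nn

n_dof, n_snap, n_latent = 200, 50, 5
snapshots = np.random.rand(n_dof, n_snap)           # placeholder high-fidelity results

# Linear reduction: truncated SVD gives the optimal linear subspace (POD basis).
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
pod_basis = U[:, :n_latent]
coeffs = pod_basis.T @ snapshots                    # reduced variables
recon_pod = pod_basis @ coeffs                      # linear reconstruction

# Nonlinear reduction: a small fully connected autoencoder.
encoder = nn.Sequential(nn.Linear(n_dof, 64), nn.ReLU(), nn.Linear(64, n_latent))
decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(), nn.Linear(64, n_dof))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
x = torch.tensor(snapshots.T, dtype=torch.float32)  # one snapshot per row
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(x)), x)  # reconstruction error
    loss.backward()
    opt.step()
```

The encoder output plays the role of the reduced variables; a hybrid SVD-autoencoder, as in the paper, would first project onto a truncated SVD basis before applying the autoencoder.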
Solving the neutron transport equations is a demanding computational challenge. This paper combines reduced-order modelling with domain decomposition to develop an approach that can tackle such problems. The idea is to decompose the domain of a reactor, form basis functions locally in each sub-domain and construct a reduced-order model from this. Several different ways of constructing the basis functions for local sub-domains are proposed, and a comparison is given with a reduced-order model that is formed globally. A relatively simple one-dimensional slab reactor provides a test case with which to investigate the capabilities of the proposed methods. The results show that the domain decomposition reduced-order models perform comparably with the global reduced-order model when the total number of reduced variables in the system is the same, with the potential for the offline computational cost to be significantly lower.
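One simple way of forming basis functions locally is to restrict the snapshots to each sub-domain and apply a truncated SVD there. The sketch below assumes a 1D field split into equal sub-domains; sizes and data are placeholders, not the paper's test case.

```python
# Hedged sketch: local POD bases per sub-domain, assembled into a block-diagonal
# global reduced basis for a domain-decomposition reduced-order model.
import numpy as np
from scipy.linalg import block_diag

n_dof, n_snap = 120, 40
snapshots = np.random.rand(n_dof, n_snap)      # rows: spatial nodes, columns: snapshots
n_sub, n_local = 4, 3                          # sub-domains and local basis functions each

local_bases = []
for idx in np.array_split(np.arange(n_dof), n_sub):
    U, _, _ = np.linalg.svd(snapshots[idx, :], full_matrices=False)
    local_bases.append(U[:, :n_local])         # local POD basis for this sub-domain

# Block-diagonal global basis: total reduced variables = n_sub * n_local,
# directly comparable with a global basis of the same size.
R = block_diag(*local_bases)
reduced = R.T @ snapshots                      # snapshots in the reduced space
```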
Regression modelling has always been a key process in unlocking the relationships between independent and dependent variables that are held within data. In recent years, machine learning has uncovered new insights in many fields, providing predictions to previously unsolved problems. Generative Adversarial Networks (GANs) have been widely applied to image processing with good results; however, these methods have not often been applied to non-image data. Given the powerful generative capabilities of GANs, we explore their use here as a regression method. In particular, we explore the use of the Wasserstein GAN (WGAN) as a multi-output regression method. We call the resulting method Multi-Output Regression GAN (MOR-GAN) and compare its performance to that of Gaussian Process Regression (GPR), a commonly used non-parametric regression method that has been well tested on small datasets with noisy responses. The WGAN regression model performs well for all types of datasets and exhibits substantial improvements over the performance of the GPR for certain types of datasets, demonstrating the flexibility of the GAN as a model for regression.
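To make the comparison concrete, the sketch below shows a scikit-learn GPR baseline next to the shape of a conditional generator that maps an input plus latent noise to a multi-output response, as a WGAN-style regressor would. The critic and adversarial training loop are omitted, and all sizes and data are illustrative assumptions.

```python
# Illustrative sketch only: GPR baseline and an (untrained) conditional generator.
import numpy as np
import torch
import torch.nn as nn
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X = np.random.rand(100, 3)                     # placeholder inputs
Y = np.random.rand(100, 2)                     # placeholder multi-output responses

# Non-parametric baseline: Gaussian process regression with an RBF kernel.
gpr = GaussianProcessRegressor(kernel=RBF()).fit(X, Y)
y_mean, y_std = gpr.predict(np.random.rand(5, 3), return_std=True)

# Conditional generator: concatenates the regression input with a noise vector,
# so repeated sampling yields a distribution over responses, not a point estimate.
n_in, n_out, n_noise = 3, 2, 8
generator = nn.Sequential(nn.Linear(n_in + n_noise, 64), nn.ReLU(),
                          nn.Linear(64, n_out))
x = torch.rand(5, n_in)
samples = torch.stack([generator(torch.cat([x, torch.randn(5, n_noise)], dim=1))
                       for _ in range(100)])   # 100 draws per input point
y_pred = samples.mean(dim=0)                   # predictive mean from the samples
```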
This paper presents a new approach which uses the tools within artificial intelligence (AI) software libraries as an alternative way of solving partial differential equations (PDEs) that have been discretised using standard numerical methods. In particular, we describe how to represent numerical discretisations arising from the finite volume and finite element methods by pre-determining the weights of convolutional layers within a neural network. As the weights are defined by the discretisation scheme, no training of the network is required and the solutions obtained are identical (accounting for solver tolerances) to those obtained with standard codes often written in Fortran or C++. We also explain how to implement the Jacobi method and a multigrid solver using the functions available in AI libraries. For the latter, we use a U-Net architecture which is able to represent a sawtooth multigrid method. A benefit of using AI libraries in this way is that one can exploit their built-in technologies to enable the same code to run on different computer architectures (such as central processing units, graphics processing units or new-generation AI processors) without any modification. In this article, we apply the proposed approach to eigenvalue problems in reactor physics where neutron transport is described by diffusion theory. For a fuel assembly benchmark, we demonstrate that the solution obtained from our new approach is the same (accounting for solver tolerances) as that obtained from the same discretisation coded in a standard way using Fortran. We then proceed to solve a reactor core benchmark using the new approach. For both benchmarks we give timings for the neural network implementation run on a CPU and a GPU, and a serial Fortran code run on a CPU.
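The core idea of pre-determining convolutional weights from a discretisation can be sketched as follows. This is not the paper's implementation; it uses a standard 5-point finite difference/volume stencil for the Laplacian on a uniform grid, with a placeholder right-hand side, to show how a fixed-weight convolution can drive a Jacobi iteration.

```python
# Minimal sketch: a discretisation stencil as fixed (untrained) conv weights,
# used inside a Jacobi iteration u <- (b - N u) / D.
import torch
import torch.nn as nn

# 5-point stencil of the discrete (negative) Laplacian on a unit-spaced grid.
stencil = torch.tensor([[ 0., -1.,  0.],
                        [-1.,  4., -1.],
                        [ 0., -1.,  0.]])

# Convolutional layer whose weights are set by the discretisation, not learned.
laplacian = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
with torch.no_grad():
    laplacian.weight.copy_(stencil.view(1, 1, 3, 3))
laplacian.weight.requires_grad_(False)

# Off-diagonal part N of the stencil (centre entry zeroed) as a second conv.
off_diag = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
n_stencil = stencil.clone()
n_stencil[1, 1] = 0.0
with torch.no_grad():
    off_diag.weight.copy_(n_stencil.view(1, 1, 3, 3))
off_diag.weight.requires_grad_(False)

b = torch.rand(1, 1, 32, 32)           # placeholder right-hand side
u = torch.zeros_like(b)                # initial guess; zero padding acts as the boundary
for _ in range(200):
    u = (b - off_diag(u)) / 4.0        # divide by the diagonal stencil entry D
```

Because the forward passes are ordinary tensor operations, the same loop runs unchanged on a CPU, a GPU or other accelerators supported by the library.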