A crucial step in optimizing a system is the formulation of the objective function, part of which concerns the selection of the design parameters. A major concern in parameterizing the objective function is the trade-off between exploring feasible solutions in the design space and maintaining an admissible computational effort. To achieve this balance in optimization problems with Computer-Aided Engineering (CAE) models, conventional constructive geometric representations are replaced by deformation methods, e.g., free-form deformation, where the positions of a few control points can induce large-scale shape modifications. However, in light of recent developments in geometric deep learning architectures, autoencoders have emerged as a promising alternative for efficiently condensing high-dimensional models into compact representations. Hence, in this paper we present a novel perspective on geometric deep learning models by exploring the applicability of the latent space of a Point Cloud Autoencoder (PC-AE) to shape optimization problems with evolutionary algorithms. Focusing on engineering applications, a target shape matching optimization is used as a surrogate problem for computationally expensive CAE simulations. Evaluating the quality of the solutions achieved in the optimization, together with further aspects such as shape feasibility, shows PC-AE models to be consistent and suitable geometric representations for such problems, adding a new perspective on handling high-dimensional models in optimization tasks.
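The optimization described above can be illustrated with a minimal sketch: an evolutionary search over the latent space of a PC-AE, where the fitness of a candidate latent code is the Chamfer distance between its decoded point cloud and a target shape. The `decode` function below is a hypothetical stand-in (a fixed linear map), not the decoder of any trained model, and the (1+λ) evolution strategy is a deliberately simple substitute for the evolutionary algorithms used in the paper.

```python
import math
import random

# Hypothetical stand-in for a trained PC-AE decoder: maps a 2-D latent
# vector to a small 2-D point cloud via a fixed linear map.  A real
# decoder would be a neural network trained on engineering shapes.
def decode(z):
    basis = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, -0.5)]
    return [(z[0] * bx, z[1] * by) for bx, by in basis]

def chamfer(a, b):
    """Symmetric Chamfer distance between two point clouds."""
    def one_sided(p, q):
        return sum(min(math.dist(x, y) for y in q) for x in p) / len(p)
    return one_sided(a, b) + one_sided(b, a)

def latent_es(target, dim=2, pop=16, gens=60, sigma=0.3, seed=0):
    """(1+lambda) evolution strategy operating directly on latent codes."""
    rng = random.Random(seed)
    best = [0.0] * dim
    best_f = chamfer(decode(best), target)
    for _ in range(gens):
        for _ in range(pop):
            cand = [v + rng.gauss(0.0, sigma) for v in best]
            f = chamfer(decode(cand), target)
            if f < best_f:
                best, best_f = cand, f
    return best, best_f

# Target shape matching: the target is a shape with a known latent code,
# so the search should recover a nearby code with low Chamfer distance.
target = decode([1.5, -0.8])
z_opt, fitness = latent_es(target)
```

Because every candidate is decoded from the latent space, each decoded shape inherits the characteristics of the learned object class, which is the property that makes the latent representation attractive compared to unconstrained per-point modifications.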
Geometric Deep Learning (GDL) methods have recently gained interest as powerful, high-dimensional models for approaching various geometry processing tasks. However, training deep neural network models on geometric input requires considerable computational effort, even more so when one considers typical problem sizes found in application domains such as engineering, where geometric data are often orders of magnitude larger than the inputs currently considered in the GDL literature. Hence, an assessment of the scalability of the training task is necessary, in which model and data set parameters can be mapped to the computational demand during training. The present paper therefore studies the effects of data set size and the number of free model parameters on the computational effort of training a Point Cloud Autoencoder (PC-AE). We further review pre-processing techniques to obtain efficient representations of high-dimensional inputs to the PC-AE and investigate the effects of these techniques on the information abstracted by the trained model. We perform these experiments on synthetic geometric data inspired by engineering applications, using computing hardware with recent graphics processing units (GPUs) and high memory capacity. The present study thus provides a comprehensive evaluation of how to scale geometric deep learning architectures to high-dimensional inputs, allowing state-of-the-art deep learning methods to be applied in real-world tasks.
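The relationship between input size and the number of free model parameters can be sketched with a simple parameter-count model. The architecture below (a PointNet-style shared per-point MLP encoder with max pooling, plus a fully connected decoder emitting `n_points * 3` coordinates) and its layer widths are illustrative assumptions, not the architecture of any specific published model; the point is that a fully connected decoder's output layer makes the parameter count grow linearly with the number of points per cloud.

```python
# Rough parameter-count model for a PC-AE: PointNet-style encoder
# (shared per-point MLP, then max pooling to a latent code) and a fully
# connected decoder that emits n_points * 3 output coordinates.
def pc_ae_params(n_points, latent_dim, enc_widths=(64, 128, 256)):
    params = 0
    prev = 3                                  # x, y, z per input point
    for w in enc_widths:                      # shared per-point MLP layers
        params += prev * w + w                # weights + biases
        prev = w
    params += prev * latent_dim + latent_dim  # projection to latent code
    hidden = 256                              # decoder hidden layer width
    params += latent_dim * hidden + hidden
    params += hidden * (n_points * 3) + n_points * 3
    return params

# The decoder output layer dominates, so the parameter count grows
# roughly linearly with the number of points per cloud:
small = pc_ae_params(n_points=2048, latent_dim=128)
large = pc_ae_params(n_points=8192, latent_dim=128)
```

This linear growth in decoder parameters, multiplied by data set size and epoch count, gives a first-order picture of why scaling PC-AE training to engineering-sized point clouds is dominated by memory and compute on the output side, motivating the pre-processing techniques reviewed in the paper.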
The choice of design representations, as well as of search operators, is central to the performance of evolutionary optimization algorithms, in particular for multi-task problems. The multi-task approach pushes the parallelization aspect of these algorithms further by solving multiple optimization tasks simultaneously with a single population. During the search, the operators implicitly transfer knowledge from parent solutions to the offspring, exploiting potential synergies between problems to drive the solutions towards optimality. Nevertheless, in order to operate on the individuals, the design space of each task has to be mapped to a common search space, which is challenging in engineering cases without a clear semantic overlap between parameters. Here, we apply a 3D point cloud autoencoder to map the representations from the Cartesian space to a unified design representation: the latent space of the autoencoder. Transferring latent space features between design representations allows the reconstruction of shapes with interpolated characteristics and the maintenance of common parts, which potentially improves the performance of the designs in one or more tasks during the optimization. Compared to traditional representations for shape optimization, such as free-form deformation, the latent representation enables more representative design modifications while keeping the baseline characteristics of the learned classes of objects. We demonstrate the efficiency of our approach in an optimization scenario where we minimize the aerodynamic drag of two different car shapes with common underbodies for cost-efficient vehicle platform design.
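The latent-space knowledge transfer described above can be sketched as two simple operators on latent codes: linear interpolation (producing shapes with blended characteristics) and selective dimension transfer (keeping common parts, e.g. a shared underbody, identical across designs). The latent vectors and the choice of which dimensions encode the shared part are purely hypothetical here; in practice the codes come from the trained encoder.

```python
# Hypothetical latent codes for two car shapes encoded by the same PC-AE.
# In practice these come from the trained encoder; here they are fixed
# vectors for illustration.
z_car_a = [0.9, -0.2, 0.4, 1.1]
z_car_b = [-0.3, 0.7, 0.5, -0.6]

def interpolate(za, zb, alpha):
    """Linear interpolation in latent space (alpha=0 -> za, alpha=1 -> zb)."""
    return [(1 - alpha) * a + alpha * b for a, b in zip(za, zb)]

def transfer_features(za, zb, shared_dims):
    """Copy selected latent dimensions (e.g. those assumed to encode the
    shared underbody) from za into zb, keeping the rest of zb unchanged."""
    child = list(zb)
    for i in shared_dims:
        child[i] = za[i]
    return child

# Offspring with blended characteristics of both parents:
blend = interpolate(z_car_a, z_car_b, 0.5)
# Offspring of car B that inherits car A's values in dimensions 0 and 1:
child = transfer_features(z_car_a, z_car_b, shared_dims=[0, 1])
```

Decoding `blend` or `child` with the PC-AE would yield valid shapes of the learned object class, which is what allows such operators to act as crossover in a unified search space across tasks.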