Residual neural networks (ResNets) are a promising class of deep neural networks that have shown excellent performance for a number of learning tasks, e.g., image classification and recognition. Mathematically, ResNet architectures can be interpreted as forward Euler discretizations of a nonlinear initial value problem whose time-dependent control variables represent the weights of the neural network. Hence, training a ResNet can be cast as an optimal control problem of the associated dynamical system. For similar time-dependent optimal control problems arising in engineering applications, parallel-in-time methods have shown notable improvements in scalability. This paper demonstrates the use of those techniques for efficient and effective training of ResNets. The proposed algorithms replace the classical (sequential) forward and backward propagation through the network layers by a parallel nonlinear multigrid iteration applied to the layer domain. This adds a new dimension of parallelism across layers that is attractive when training very deep networks. From this basic idea, we derive multiple layer-parallel methods. The most efficient version employs a simultaneous optimization approach where updates to the network parameters are based on inexact gradient information in order to speed up the training process. Using numerical examples from supervised classification, we demonstrate that the new approach achieves similar training performance to traditional methods, but enables layer-parallelism and thus provides speedup over layer-serial methods through greater concurrency.

Deep neural networks (DNNs), in particular deep residual networks (ResNets) [36], have been breaking human records in various contests and are now central to technology such as image recognition [38,43,45] and natural language processing [6,15,41]. The abstract goal of machine learning is to model a function f : Y → C for input-output pairs (y, c) from a certain data set Y × C. Depending on the nature of inputs and outputs, the task can be regression or classification. When outputs are available for all samples, for only part of the samples, or not at all, this formulation describes supervised, semi-supervised, and unsupervised learning, respectively. The function f can be thought of as an interpolation or approximation function.

In deep learning, the function f involves a DNN that aims at transforming the input data using many layers. The layers successively apply affine transformations and element-wise nonlinearities that are parametrized by the network parameters θ. The training problem consists of finding the parameters θ such that (1.1) is satisfied for data elements from a training data set, but also holds for previously unseen data from a validation data set, which has not been used during training. The former objective is commonly modeled as an expected loss, and optimization techniques are used to find the parameters that minimize the loss. Despite rapid methodological developments, compute times for training state-of-the-art DNNs can still be prohibitive, measured in the order of …
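To make the layer-to-time-step correspondence above concrete, the following short sketch spells it out in generic notation; the symbols (hidden states u_k, layer weights W_k and b_k, step size h, activation σ, final time T) are illustrative choices and not necessarily the paper's exact formulation. A ResNet block of the form

    u_{k+1} = u_k + h σ(W_k u_k + b_k),   k = 0, ..., N-1,

is one forward Euler step, with step size h = T/N, for the nonlinear initial value problem

    du/dt = σ(W(t) u(t) + b(t)),   t ∈ (0, T],   u(0) = y,

so training the time-dependent weights θ(t) = (W(t), b(t)) to minimize an expected loss of the terminal state u(T) against the labels c is a time-dependent optimal control problem, with the layer index playing the role of the time variable. This is the viewpoint that allows parallel-in-time methods, originally developed for such control problems, to be applied across the layers of the network.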
In this paper, an adjoint solver for the multigrid-in-time software library XBraid is presented. XBraid provides a non-intrusive approach to simulating unsteady dynamics on multiple processors, parallelizing not only in space but also in the time domain [60]. It applies an iterative multigrid-reduction-in-time (MGRIT) algorithm to existing spatially parallel classical time propagators and computes the unsteady solution parallel in time. Techniques from automatic differentiation are used to develop a consistent discrete adjoint solver that provides sensitivity information of output quantities with respect to changes in the design parameters. The adjoint code runs backwards through the primal XBraid actions and accumulates gradient information parallel in time. It is highly non-intrusive, as existing adjoint time propagators can easily be integrated through the adjoint interface. The adjoint code is validated on an advection-dominated flow with a periodic upstream boundary condition. It shows strong scaling behavior similar to that of the primal XBraid solver and offers great potential for reducing the overall computational cost of sensitivity analysis on multiple processors.
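To illustrate the discrete-adjoint pattern described in the abstract (a forward sweep of the primal time stepper followed by a reverse sweep that accumulates gradient information), the following minimal serial sketch in plain Python may help. It is not the XBraid adjoint interface: the model problem du/dt = -rho*u, the step function, and the quadratic output J are assumptions made purely for illustration. In the library itself, the role of the stored states is played by a record of the primal XBraid actions, and both sweeps are distributed across processors in time.

# Minimal serial sketch of discrete-adjoint gradient accumulation through
# time steps (illustrative only; not the XBraid adjoint interface).
def step(u, rho, dt):
    # One primal forward Euler step for the model ODE du/dt = -rho * u.
    return u + dt * (-rho * u)

def step_adjoint(u, rho, dt, u_bar):
    # Transposed Jacobian-vector products of one step:
    # returns (d step / d u)^T * u_bar and (d step / d rho)^T * u_bar.
    return (1.0 - dt * rho) * u_bar, (-dt * u) * u_bar

def gradient_via_adjoint(u0, rho, dt, nsteps):
    # Forward sweep: store the primal states needed by the reverse sweep.
    states = [u0]
    for _ in range(nsteps):
        states.append(step(states[-1], rho, dt))
    # Reverse sweep: seed with dJ/du(T) for J = 0.5 * u(T)^2 and run
    # backwards through the steps, accumulating the design gradient.
    u_bar, rho_bar = states[-1], 0.0
    for n in reversed(range(nsteps)):
        u_bar, drho = step_adjoint(states[n], rho, dt, u_bar)
        rho_bar += drho
    return rho_bar

def objective(rho, u0=1.0, dt=0.01, nsteps=100):
    # Output quantity J(rho) obtained by running the primal sweep.
    u = u0
    for _ in range(nsteps):
        u = step(u, rho, dt)
    return 0.5 * u * u

# Finite-difference check of the adjoint gradient.
eps = 1e-6
print(gradient_via_adjoint(1.0, 0.7, 0.01, 100),
      (objective(0.7 + eps) - objective(0.7 - eps)) / (2 * eps))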