Response surfaces are common surrogates for expensive computer simulations in engineering analysis. However, the cost of fitting an accurate response surface increases exponentially with the number of model inputs, which makes response surface construction intractable for high-dimensional, nonlinear models. We describe ridge approximation for fitting response surfaces in several variables. A ridge function is constant along several directions in its domain, so fitting occurs on the coordinates of a low-dimensional subspace of the input space. We review essential theory for ridge approximation---e.g., the best mean-squared approximation and an optimal low-dimensional subspace---and we prove that the gradient-based active subspace is near-stationary for the least-squares problem that defines an optimal subspace. Motivated by the theory, we propose a computational heuristic that uses an estimated active subspace as an initial guess for the ridge approximation fitting problem. We show a simple example where the heuristic fails, which reveals a type of function for which the proposed approach is inappropriate. We then propose a simple alternating heuristic for fitting a ridge function, and we demonstrate the effectiveness of the active subspace initial guess applied to an airfoil model of drag as a function of its 18 shape parameters.
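The core idea above can be sketched in a few lines: if a function varies only along a few directions, the fit happens in the low-dimensional ridge coordinate rather than the full input space. This is a minimal illustration with a made-up 10-dimensional function that is secretly one-dimensional along a known direction `u`; the cubic test function and all names are illustrative, not taken from the paper.

```python
import numpy as np

# A ridge function varies only along a few directions: f(x) = g(U^T x).
# Hypothetical example: a 10-dimensional function that is secretly
# one-dimensional along the unit vector u, with g a cubic.
rng = np.random.default_rng(0)
u = np.ones(10) / np.sqrt(10)            # the (here, assumed known) ridge direction
f = lambda X: (X @ u) ** 3 + (X @ u)     # f(x) = g(u^T x) with g(y) = y^3 + y

# Fitting happens in the one-dimensional ridge coordinate y = u^T x,
# not in the full 10-dimensional input space.
X = rng.standard_normal((200, 10))
coeffs = np.polyfit(X @ u, f(X), deg=3)
surrogate = lambda X: np.polyval(coeffs, X @ u)

# Because all variation lies along u, the 1-D fit reproduces f.
Xtest = rng.standard_normal((50, 10))
max_err = np.max(np.abs(surrogate(Xtest) - f(Xtest)))
```

In practice the direction `u` is unknown; estimating it (e.g., from gradients via an active subspace) is exactly the hard part the abstract addresses.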
Inexpensive surrogates are useful for reducing the cost of science and engineering studies involving large-scale, complex computational models with many input parameters. A ridge approximation is one class of surrogate that models a quantity of interest as a nonlinear function of a few linear combinations of the input parameters. When used in parameter studies (e.g., optimization or uncertainty quantification), ridge approximations allow the low-dimensional structure to be exploited, reducing the effective dimension. We introduce a new, fast algorithm for constructing a ridge approximation in which the nonlinear function is a polynomial. This polynomial ridge approximation is chosen to minimize the least-squares mismatch between the surrogate and the quantity of interest on a given set of inputs. Naively, this would require optimizing both the polynomial coefficients and the linear combination weights; the latter define a low-dimensional subspace of the input space. However, given a fixed subspace, the optimal polynomial can be found by solving a linear least-squares problem. Hence, using variable projection, the polynomial can be defined implicitly, leaving an optimization problem over the subspace alone. Here we develop an algorithm that finds this polynomial ridge approximation by minimizing over the Grassmann manifold of low-dimensional subspaces using a Gauss-Newton method. Our Gauss-Newton method has stronger theoretical guarantees and, on our numerical examples, faster convergence than the alternating approach for polynomial ridge approximation proposed earlier by Constantine, Eftekhari, Hokanson, and Ward [https://doi.org/10.1016/j.cma.2017.07.038], which alternates between (i) optimizing the polynomial coefficients given the subspace and (ii) optimizing the subspace given the coefficients.
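The variable projection step described above can be sketched as follows for a one-dimensional subspace: for any fixed direction, the optimal polynomial coefficients come from a linear least-squares solve, so the residual becomes a function of the subspace alone. This is an illustrative sketch, not the paper's implementation; the function names and the cubic test problem are assumptions.

```python
import numpy as np

def vp_residual(u, X, fX, degree=3):
    """Residual of the best degree-`degree` polynomial fit in the ridge
    coordinate y = u^T x, with the coefficients eliminated by variable
    projection: they solve a linear least-squares problem for each u."""
    y = X @ u                                    # ridge coordinate
    V = np.vander(y, degree + 1)                 # polynomial basis in y
    c, *_ = np.linalg.lstsq(V, fX, rcond=None)   # optimal coefficients for this u
    return fX - V @ c                            # residual depends only on u

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))
u_true = np.array([3.0, 4.0, 0.0, 0.0, 0.0]) / 5.0   # a made-up true direction
fX = (X @ u_true) ** 3 - (X @ u_true)

# At the true subspace the residual vanishes; at a wrong one it does not.
r_true = np.linalg.norm(vp_residual(u_true, X, fX))
r_off = np.linalg.norm(vp_residual(np.eye(5)[0], X, fX))
```

The outer optimization over `u` (on the Grassmann manifold, via Gauss-Newton in the paper) then only has to drive this implicitly defined residual to zero.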
To what extent do the vibrations of a mechanical system reveal its composition? Despite innumerable applications and mathematical elegance, this question often slips through the cracks that separate courses in mechanics, differential equations, and linear algebra. We address this omission by detailing a classical finite-dimensional example: the use of frequencies of vibration to recover the positions and masses of beads vibrating on a string. First we derive the equations of motion, then compare the eigenvalues of the resulting linearized model against vibration data measured from our laboratory's monochord. More challenging is the recovery of the masses and positions of the beads from spectral data, a problem for which a variety of elegant algorithms exist. After presenting one such method, based on orthogonal polynomials, in a manner suitable for advanced undergraduates, we confirm its efficacy through physical experiment. We encourage readers to conduct their own explorations using the numerous data sets we provide.
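The forward problem described above (frequencies from known masses and positions) reduces to a small matrix eigenvalue computation. This sketch uses standard textbook values, not the paper's experimental data: three equal unit masses, equally spaced on a unit-length string under unit tension, fixed at both ends.

```python
import numpy as np

# Linearized transverse vibration of beads on a taut string satisfies
# M q'' = -K q, so the natural frequencies are the square roots of the
# eigenvalues of M^{-1} K. Values below are illustrative, not measured.
T = 1.0                                    # string tension (assumed)
m = np.array([1.0, 1.0, 1.0])              # bead masses
l = np.array([0.25, 0.25, 0.25, 0.25])     # segment lengths (endpoints fixed)

n = len(m)
K = np.zeros((n, n))
for i in range(n):
    K[i, i] = T / l[i] + T / l[i + 1]      # restoring force from both segments
    if i + 1 < n:
        K[i, i + 1] = K[i + 1, i] = -T / l[i + 1]

# With equal masses, M^{-1} K stays symmetric, so eigvalsh applies.
eigs = np.linalg.eigvalsh(K / m[:, None])
freqs = np.sqrt(eigs)
```

The inverse problem the abstract focuses on runs the other way: recovering `m` and `l` from measured `freqs`, which is where the orthogonal-polynomial machinery comes in.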
The modern ability to collect vast quantities of data presents a challenge for parameter estimation problems. Posed as a nonlinear least-squares problem fitting a model to the data, the cost of each iteration grows linearly with the amount of data; with large data, it can easily become too expensive to perform many iterations. Here we develop an approach that projects the data onto a low-dimensional subspace that preserves the quality of the resulting parameter estimates. We provide results, from both an optimization and a statistical perspective, showing that accurate parameter estimates are obtained when the subspace angles between this projection and the Jacobian of the model at the current iterate remain small. However, for this approach to reduce computational complexity, both the projected model and the projected Jacobian must be computed inexpensively. This places a constraint on the pairs of models and subspaces for which this approach provides a computational speedup. Here we consider the exponential fitting problem projected onto the range of a Vandermonde matrix, for which the projected model and projected Jacobian can be computed in closed form using a generalized geometric sum formula. We further provide an inexpensive heuristic that picks this Vandermonde matrix so that the subspace angles with the Jacobian remain small, and we use this heuristic to update the subspace during optimization. Although the asymptotic cost still depends on the data dimension, the overall cost of solving this sequence of projected nonlinear least-squares problems is less than that of the original. Applied to the exponential fitting problem, this yields an algorithm that is not only faster in the limit of large data than the conventional nonlinear least-squares approach, but also faster than subspace-based approaches such as HSVD.
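The projection idea can be sketched on a toy exponential fit: replace the long residual by its image under an orthonormal basis for a few candidate exponentials, then iterate on the short projected residual. This is an illustrative sketch only (single real decay rate, noiseless data, a hand-rolled Gauss-Newton step, and candidate rates chosen by hand rather than by the paper's heuristic or its closed-form geometric-sum formulas).

```python
import numpy as np

t = np.linspace(0.0, 1.0, 10_000)        # large data dimension
a_true = -2.0
d = np.exp(a_true * t)                   # noiseless data (assumed)

# Orthonormalize a small basis of candidate exponentials; its range plays
# the role of the Vandermonde-matrix subspace in the abstract.
S = np.array([-3.0, -2.5, -1.5, -1.0])   # candidate rates (assumed)
W, _ = np.linalg.qr(np.exp(np.outer(t, S)))

a = -1.0                                 # initial guess for the decay rate
for _ in range(100):                     # Gauss-Newton on the projected residual
    r = W.T @ (np.exp(a * t) - d)        # length-4 residual, not length-10,000
    J = W.T @ (t * np.exp(a * t))        # projected Jacobian column
    a -= (J @ r) / (J @ J)               # scalar Gauss-Newton step
```

The speedup in the paper comes from never forming the full-length products `W.T @ (...)` explicitly; here they are computed directly only to keep the sketch short.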
The Sanathanan-Koerner iteration, developed in 1963, is a classical approach for rational approximation. This approach multiplies both sides of the approximation by the denominator polynomial, yielding a linear problem, and then introduces a weight at each iteration to correct for this linearization. Unfortunately, this weight introduces a numerical instability. We correct this instability by constructing Vandermonde matrices for both the numerator and denominator polynomials using the Arnoldi iteration with an initial vector that enforces this weighting. This Stabilized Sanathanan-Koerner iteration corrects the instability and yields accurate rational approximations of arbitrary degree. Using a multivariate extension of Vandermonde with Arnoldi, we can apply the Stabilized Sanathanan-Koerner iteration to multivariate rational approximation problems. The resulting multivariate approximations are often significantly better than those of existing techniques and display more uniform accuracy throughout the domain.
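The Vandermonde-with-Arnoldi construction referenced above can be sketched as follows: instead of forming the ill-conditioned monomial Vandermonde matrix, build a discretely orthogonal polynomial basis on the sample points by the Arnoldi iteration. This is a generic sketch of that construction (after Brubeck, Nakatsukasa, and Trefethen), not the stabilized iteration itself; the hook for the Sanathanan-Koerner weighting is the non-constant initial vector `q0`.

```python
import numpy as np

def vandermonde_arnoldi(x, n, q0=None):
    """Orthonormal polynomial basis of degree n on nodes x, built by
    Arnoldi with multiplication by x as the operator. A non-constant
    initial vector q0 is where a weighting (such as the SK weight)
    would be enforced."""
    m = len(x)
    Q = np.zeros((m, n + 1))
    H = np.zeros((n + 1, n))
    v0 = np.ones(m) if q0 is None else q0
    Q[:, 0] = v0 / np.linalg.norm(v0)
    for k in range(n):
        v = x * Q[:, k]                  # multiply by x (the Arnoldi "matrix")
        for j in range(k + 1):           # modified Gram-Schmidt against earlier columns
            H[j, k] = Q[:, j] @ v
            v -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        Q[:, k + 1] = v / H[k + 1, k]
    return Q, H

x = np.linspace(-1.0, 1.0, 500)
Q, H = vandermonde_arnoldi(x, 20)        # well-conditioned, unlike np.vander
```

The columns of `Q` span the same polynomial space as the degree-20 monomial Vandermonde matrix but are orthonormal on the nodes, which is what keeps high-degree fits numerically stable.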