Relationships between hydrologic variables are often nonlinear. Usually, the functional form of such a relationship is not known a priori. A multivariate, nonparametric regression methodology is provided here for approximating the underlying regression function using locally weighted polynomials. Locally weighted polynomials approximate the target function through a Taylor series expansion of the function in the neighborhood of the point of estimate. Cross-validatory procedures for selecting the size of the neighborhood over which this approximation should take place, and the order of the local polynomial to use, are described and illustrated for some simple situations. The utility of this nonparametric regression approach is demonstrated through an application to nonparametric short-term forecasts of the biweekly Great Salt Lake volume. Blind forecasts up to 1 year into the future using the 1847–2004 time series of the Great Salt Lake are presented.
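The abstract does not give the authors' exact estimator, but the idea of fitting a local Taylor expansion by weighted least squares can be sketched as follows. The tricube kernel, the nearest-neighbor bandwidth, and the function name are illustrative assumptions, not the paper's method:

```python
import numpy as np

def local_poly_estimate(x, y, x0, k=20, degree=1):
    """Locally weighted polynomial estimate of E[y | x] at x0.

    A degree-p Taylor expansion of the unknown regression function
    around x0 is fit by weighted least squares, with weights that
    decay with distance (tricube kernel, an assumption here) over
    the k nearest neighbors.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    d = np.abs(x - x0)
    idx = np.argsort(d)[:k]              # neighborhood of size k
    h = d[idx].max()                     # local (nearest-neighbor) bandwidth
    w = (1 - (d[idx] / h) ** 3) ** 3     # tricube weights
    # Design matrix for a polynomial centered at x0: [1, (x-x0), (x-x0)^2, ...]
    X = np.vander(x[idx] - x0, degree + 1, increasing=True)
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y[idx])
    return beta[0]                       # intercept = estimate of f(x0)
```

The neighborhood size `k` and the polynomial `degree` are the quantities the abstract's cross-validatory procedures would select.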
Kernel density estimation methods have recently been introduced as viable and flexible alternatives to parametric methods for flood frequency estimation. Key properties of such estimators are reviewed in this paper. Attention is focused on the selection of the kernel function and the bandwidth, which are the parameters of the method. Existing techniques for kernel and bandwidth selection are applied to three situations: Gaussian data, skewed data (three-parameter gamma), and mixture data. The intent was to investigate issues relevant to parameter estimation as well as to the likely performance of these methods with the small sample sizes typical in hydrology. Bandwidths chosen by minimizing a performance criterion related to the distribution function lead to much smaller mean square errors of tail probabilities than those chosen by cross-validation methods designed for density estimation. However, this can lead to estimates that degenerate to the empirical distribution function, and hence to an unusable flood frequency curve. Variable bandwidths with heavy-tailed kernels appear to do best. Kernel estimators become increasingly competitive in terms of mean square error of estimate as the underlying distribution gets more complex.

INTRODUCTION

Estimating exceedance frequencies of annual maximum flood events at a gaged site is a classical problem of hydrology. A finite data set (usually 20–100 points) is used to extrapolate flood magnitudes corresponding to recurrence intervals of up to 1000 years. A "curve-fitting" or parametric approach is traditionally used for the purpose. An a priori choice of a probability distribution function (e.g., log Pearson type III) is made, and its parameters are estimated using one of several methods (e.g., moments, entropy, or likelihood maximization). Despite intensive research and legislation, no particular curve has emerged as a clear "winner" across different sites.
Indeed, for the typical sample sizes given above, methods for selecting between probability distributions at a site (e.g., the Kolmogorov-Smirnov test or the chi-square test) cannot discriminate among candidate families of distributions or among members of a family [e.g., Kite, 1977]. A comparison between a variable kernel estimate (VK-C-AC) of the cumulative distribution function (CDF) for the St. Mary's River data used by Kite [1977] and Kite's results for a variety of parametric distributions is presented in Figure 1. The kernel estimate is close to the empirical CDF, and none of the parametric alternatives appear reasonable. The empirical CDF and the kernel estimator suggest a bimodal probability density for the data. A hydrologist forced to choose between the parametric alternatives would find from Kite that one cannot discriminate among the two-parameter lognormal, three-parameter lognormal, type I extremal, Pearson type III, and log Pearson type III on the basis of standard tests. However, none of these distributions provides a fit consistent with the empirical distribution function. There are often a number of cau...
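The kernel CDF estimator discussed above can be illustrated with a minimal fixed-bandwidth sketch (the paper's variable-bandwidth VK-C-AC estimator is more elaborate; the Gaussian kernel and function name here are assumptions):

```python
import math

def kernel_cdf(data, x, h):
    """Gaussian-kernel estimate of the CDF at x:

        F_hat(x) = (1/n) * sum_i Phi((x - X_i) / h),

    where Phi is the standard normal CDF. The bandwidth h controls the
    trade-off the abstract describes: h -> 0 recovers the empirical CDF
    (a step function), while a large h oversmooths the tails.
    """
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sum(phi((x - xi) / h) for xi in data) / len(data)
```

A variable-bandwidth version would replace the single `h` with a per-observation bandwidth `h_i`, typically larger for observations in the tails.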
Vaginal pulse amplitude (VPA) has been the most commonly analyzed signal of the vaginal photoplethysmograph. This signal is typically contaminated by frequent, large artifacts of variable morphology. These artifacts have usually been corrected by hand, which may introduce large differences in outcomes across laboratories. VPA signals were collected from 22 women who viewed a neutral film and a sexual film. An automated, wavelet-based denoising algorithm was compared against the uncorrected signal and the signal corrected in the typical manner (by hand). The automated wavelet denoising produced the same pattern of results as the hand-corrected signal. The wavelet procedure thus automates artifact reduction in the VPA, and its mathematical formalization permits future comparison of competing methods for improving the signal-to-noise ratio.
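The abstract does not specify the wavelet family or thresholding rule used, so the following is only a generic sketch of the wavelet-denoising idea: transform, shrink the detail coefficients, invert. The Haar transform and soft thresholding are assumptions here, not the published algorithm:

```python
import numpy as np

def haar_denoise(signal, threshold, levels=3):
    """Denoise a signal via a multi-level Haar wavelet transform with
    soft thresholding of the detail coefficients.

    Small detail coefficients (presumed noise/artifact energy) are
    shrunk toward zero; the smooth approximation is kept intact.
    Signal length must be divisible by 2**levels.
    """
    x = np.asarray(signal, dtype=float)
    assert len(x) % (2 ** levels) == 0, "length must be divisible by 2**levels"
    details, approx = [], x
    for _ in range(levels):                      # forward Haar transform
        even, odd = approx[0::2], approx[1::2]
        detail = (even - odd) / np.sqrt(2)
        approx = (even + odd) / np.sqrt(2)
        # Soft threshold: shrink coefficients, zeroing the small ones
        detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
        details.append(detail)
    for detail in reversed(details):             # inverse transform
        even = (approx + detail) / np.sqrt(2)
        odd = (approx - detail) / np.sqrt(2)
        approx = np.empty(even.size * 2)
        approx[0::2], approx[1::2] = even, odd
    return approx
```

With `threshold=0` the transform round-trips exactly, which is a convenient correctness check; larger thresholds remove progressively more high-frequency content.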
In the design of spatial linkages, the finite-position kinematics is fully specified by the positions of the joint axes, i.e., a set of lines in space. However, most tasks impose additional requirements regarding motion smoothness, obstacle avoidance, force transmission, or physical dimensions, to name a few. Many of these additional performance requirements are fully or partially independent of the kinematic task and can be fulfilled using a link-based optimization after the set of joint axes has been defined. This work presents a methodology to optimize the links of spatial mechanisms that have been synthesized for a kinematic task, so that additional requirements can be satisfied. It is based on treating the links as anchored to sliding points on the set of joint axes, and making the additional requirements a function of the location of each link relative to the two joints it connects. The optimization of this function is performed using a hybrid algorithm comprising a genetic algorithm (GA) and a gradient-based minimization solver. The combination of the kinematic synthesis with the link optimization developed here allows the designer to interactively monitor, control, and adjust objectives and constraints, yielding practical solutions to realistic spatial mechanism design problems.
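The hybrid GA-plus-gradient scheme the abstract describes can be sketched generically: a genetic algorithm performs a global search over the design variables, and a gradient-based solver then refines the best candidate. The operators below (truncation selection, blend crossover, Gaussian mutation, plain gradient descent) are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def hybrid_minimize(f, grad_f, bounds, pop=40, gens=60, lr=0.01, steps=200, seed=0):
    """Minimize f over a box via a GA global search followed by
    gradient-descent refinement of the best individual."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, len(lo)))        # initial population
    for _ in range(gens):
        fit = np.apply_along_axis(f, 1, X)
        elite = X[np.argsort(fit)[: pop // 2]]          # truncation selection
        parents = elite[rng.integers(0, len(elite), size=(pop, 2))]
        X = parents.mean(axis=1)                        # blend crossover
        X += rng.normal(0, 0.1 * (hi - lo), X.shape)    # Gaussian mutation
        X = np.clip(X, lo, hi)
    x = X[np.argmin(np.apply_along_axis(f, 1, X))]      # best individual found
    for _ in range(steps):                              # gradient refinement
        x = np.clip(x - lr * grad_f(x), lo, hi)
    return x
```

The GA stage copes with the multimodal, partially nonsmooth objectives that link-placement constraints create, while the gradient stage sharpens the final answer near a local optimum.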