In this paper, we study nonparametric models allowing for locally stationary regressors and a regression function that changes smoothly over time. These models are a natural extension of time series models with time-varying coefficients. We introduce a kernel-based method to estimate the time-varying regression function and provide asymptotic theory for our estimates. Moreover, we show that the main conditions of the theory are satisfied for a large class of nonlinear autoregressive processes with a time-varying regression function. Finally, we examine structured models where the regression function splits up into time-varying additive components. As will be seen, estimation in these models does not suffer from the curse of dimensionality. (Published at http://dx.doi.org/10.1214/12-AOS1043 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).)
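The abstract does not spell out the estimator, so the following is only a minimal Python sketch of the kind of kernel estimate such models use: a local-constant (Nadaraya-Watson) form with product Gaussian kernels and rescaled time u = t/T. The function name, kernels, and bandwidth choices are assumptions; the paper's actual estimator may differ (e.g., local linear smoothing).

```python
import numpy as np

def local_constant_estimate(y, x, u0, x0, h_time, h_space):
    """Sketch: estimate m(u0, x0) in Y_t = m(t/T, X_t) + eps_t with a
    Nadaraya-Watson smoother and product Gaussian kernels.
    (Illustrative only; not the paper's exact estimator.)"""
    T = len(y)
    u = np.arange(1, T + 1) / T                     # rescaled time t/T
    w = (np.exp(-0.5 * ((u - u0) / h_time) ** 2)
         * np.exp(-0.5 * ((x - x0) / h_space) ** 2))
    return np.sum(w * y) / np.sum(w)

# Example: time-varying AR(1), Y_t = theta(t/T) * Y_{t-1} + eps_t
rng = np.random.default_rng(0)
T = 2000
y = np.zeros(T)
for t in range(1, T):
    theta = 0.8 * np.sin(np.pi * t / T)             # smoothly varying coefficient
    y[t] = theta * y[t - 1] + rng.normal(scale=0.5)

# Estimate m(0.5, 0.3); the true value here is theta(0.5) * 0.3 = 0.24
m_hat = local_constant_estimate(y[1:], y[:-1], u0=0.5, x0=0.3,
                                h_time=0.1, h_space=0.3)
```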
We investigate a longitudinal data model with non-parametric regression functions that may vary across the observed individuals. In a variety of applications, it is natural to impose a group structure on the regression curves. Specifically, we may suppose that the observed individuals can be grouped into a number of classes whose members all share the same regression function. We develop a statistical procedure to estimate the unknown group structure from the data. Moreover, we derive the asymptotic properties of the procedure and investigate its finite sample performance by means of a simulation study and a real data example.
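The abstract does not detail the grouping procedure. As an illustrative sketch, one natural approach is to smooth each individual's observations onto a common grid and cluster the resulting curve evaluations; the names (`smooth_curve`, `group_curves`) and the choice of plain k-means with L2 distance are assumptions, not the paper's method.

```python
import numpy as np

def smooth_curve(x, y, grid, h):
    """Nadaraya-Watson smoother of one individual's data on a common grid."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def group_curves(data, grid, h, k, n_iter=50, seed=0):
    """Sketch: cluster individuals by the L2 distance between their
    smoothed regression curves, via k-means on the curve evaluations.
    `data` is a list of (x_i, y_i) arrays, one pair per individual."""
    curves = np.array([smooth_curve(x, y, grid, h) for x, y in data])
    rng = np.random.default_rng(seed)
    centers = curves[rng.choice(len(curves), size=k, replace=False)]
    for _ in range(n_iter):
        d = ((curves[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        centers = np.array([curves[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers
```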
Abstract. The chapter introduces the latest developments and results of the Iterative Single Data Algorithm (ISDA) for solving large-scale support vector machine (SVM) problems. First, the equivalence of the Kernel AdaTron (KA) method (originating from a gradient ascent learning approach) and the Sequential Minimal Optimization (SMO) learning algorithm (based on an analytic quadratic programming step for a model without bias term b) in designing SVMs with positive definite kernels is shown for both nonlinear classification and nonlinear regression tasks. The chapter also introduces the classic Gauss-Seidel (GS) procedure and its derivative, the successive over-relaxation (SOR) algorithm, as viable (and usually faster) training algorithms. A convergence theorem for these related iterative algorithms is proven. The second part of the chapter presents the effects and methods of incorporating an explicit bias term b into the ISDA. The algorithms shown here implement a single-training-datum iteration routine (a.k.a. per-pattern learning), which makes the proposed ISDAs remarkably quick. The final solution in the dual domain is not an approximation but the optimal set of dual variables that would have been obtained by any existing, proven QP solver, if only it could deal with huge data sets.

Introduction. One of the mainstream research fields in learning from empirical data with support vector machines (SVMs), for both classification and regression problems, is the implementation of incremental learning schemes for huge training data sets. The challenge of applying SVMs to huge data sets comes from the fact that the amount of computer memory required by a standard quadratic programming (QP) solver grows exponentially as the size of the problem increases. Among the candidates that avoid standard QP solvers, two learning approaches that have recently drawn attention are the Iterative Single Data Algorithms (ISDAs) and sequential minimal optimization (SMO) (Platt, 1998, 1999; Vogt, 2002; Kecman, Vogt and Huang, 2003; Huang and Kecman, 2004). The ISDAs work on one data point at a time (per-pattern learning) towards the optimal solution. The Kernel AdaTron (KA) is the earliest ISDA for SVMs; it uses kernel functions to map data into the SVM's high-dimensional feature space (Frieß et al., 1998) and performs AdaTron learning (Anlauf and Biehl, 1989) in that space. Platt's SMO algorithm is an extreme case of the decomposition methods developed in (Osuna, Freund and Girosi, 1997; Joachims, 1999), working on a set of two data points at a time. Because the solution for a working set of two can be found analytically, SMO does not invoke standard QP solvers. Due to this analytic foundation, the SMO approach is particularly popular and is at the moment the most widely used, analyzed, and actively developed algorithm. At the same time, the KA, although providing similar results in solving classification…
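As a concrete illustration of the per-pattern idea, here is a minimal Kernel AdaTron-style update for an SVM classifier without bias term b. With step size 1/K[i, i], each update is the exact coordinate-wise maximizer of the dual objective, which is the sense in which KA and no-bias SMO coincide. This is only a sketch of that shared core; the chapter's GS/SOR variants and explicit-bias extensions are not reproduced here, and the helper names are assumptions.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gram matrix of a Gaussian RBF kernel (positive definite)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_adatron(K, y, C=1.0, n_epochs=100, tol=1e-5):
    """Per-pattern dual ascent for an SVM classifier without bias b.
    With step 1/K[i, i] each update is the analytic coordinate-wise
    maximizer of the dual, clipped to the box [0, C]."""
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(n_epochs):
        max_change = 0.0
        for i in range(n):
            # margin of sample i under the current dual variables
            z_i = (alpha * y) @ K[:, i]
            step = (1.0 - y[i] * z_i) / K[i, i]
            new_alpha = np.clip(alpha[i] + step, 0.0, C)
            max_change = max(max_change, abs(new_alpha - alpha[i]))
            alpha[i] = new_alpha
        if max_change < tol:                 # stop once updates stall
            break
    return alpha
```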
Advanced control systems require accurate process models, while processes are often both nonlinear and time-variant. After introducing the identification of nonlinear processes with grid-based look-up tables, a new learning algorithm for on-line adaptation of look-up tables is proposed. Using a linear regression approach, this new adaptation algorithm considerably reduces the convergence time relative to conventional gradient-based adaptation algorithms. An application example and experimental results are shown for the learning feedforward control of the ignition angle of a spark ignition engine.
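The abstract does not give the adaptation law, so the sketch below only illustrates the general idea for a 1-D table: the linear-interpolation weights of the active grid nodes form a (sparse) linear regressor, so the node heights can be updated by recursive least squares rather than a plain gradient (LMS) step. The class, its parameters, and the choice of RLS are hypothetical stand-ins for the paper's linear-regression approach.

```python
import numpy as np

class LookupTable1D:
    """Sketch: 1-D look-up table with linear interpolation between
    grid nodes, adapted on-line by recursive least squares (RLS)."""

    def __init__(self, grid, lam=0.98):
        self.grid = np.asarray(grid, dtype=float)
        self.theta = np.zeros(len(grid))        # node heights
        self.P = 1e3 * np.eye(len(grid))        # RLS covariance
        self.lam = lam                          # forgetting factor

    def _phi(self, u):
        """Interpolation weights: 1 at a node, linear in between."""
        phi = np.zeros(len(self.grid))
        j = np.clip(np.searchsorted(self.grid, u) - 1, 0, len(self.grid) - 2)
        t = (u - self.grid[j]) / (self.grid[j + 1] - self.grid[j])
        t = np.clip(t, 0.0, 1.0)                # no extrapolation
        phi[j], phi[j + 1] = 1.0 - t, t
        return phi

    def predict(self, u):
        return self._phi(u) @ self.theta

    def adapt(self, u, y):
        """One RLS step on the measured output y at operating point u."""
        phi = self._phi(u)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.theta += k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
```

Because only the two neighbouring nodes have nonzero weight at any operating point, each sample adapts the table locally, which is what makes this style of on-line identification practical in embedded engine controllers.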