Many sufficient dimension reduction methods for univariate regression have been extended to multivariate regression. Sliced average variance estimation (SAVE) has the potential to recover more reductive information, and recent developments enable us to test the dimension and predictor effects using distributions commonly seen in the literature. In this paper, we aim to extend the functionality of SAVE to multivariate regression. Toward this goal, we propose three new methods. Numerical studies and real data analysis demonstrate that the proposed methods perform well.
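For orientation, here is a minimal sketch of classical univariate-response SAVE, the method the paper extends to multivariate responses; the slicing scheme, slice count, and toy data are illustrative choices, not the authors'.

```python
# A minimal sketch of classical univariate-response SAVE. The slicing scheme,
# slice count, and toy data below are illustrative choices, not the authors'.
import numpy as np

def save_directions(X, y, n_slices=5, d=1):
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    # Standardize the predictors: Z = (X - mean) @ Sigma^{-1/2}
    evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ inv_sqrt
    # Slice on the response and accumulate M = sum_h f_h (I - Cov(Z | slice h))^2
    M = np.zeros((p, p))
    for idx in np.array_split(np.argsort(y), n_slices):
        A = np.eye(p) - np.cov(Z[idx], rowvar=False)
        M += (len(idx) / n) * A @ A
    # Leading eigenvectors of M span the estimate; map back to the X scale
    eta = np.linalg.eigh(M)[1][:, ::-1][:, :d]
    return inv_sqrt @ eta

# Toy usage: the response depends on X[:, 0] only through its variance,
# a case SAVE is known to recover where first-moment methods can fail.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=500)
print(save_directions(X, y).ravel())
```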
Several sparseness penalties have been suggested for delivering good predictive performance in automatic variable selection within the framework of regularization. All assume that the true model is sparse. We propose a penalty, a convex combination of two norms, that adapts to a variety of situations, including sparseness and nonsparseness, grouping and nongrouping. The proposed penalty performs grouping and adaptive regularization. In addition, we introduce a novel homotopy algorithm utilizing subgradients for developing regularization solution surfaces involving multiple regularizers, which permits efficient computation and adaptive tuning. In simulated and real examples, the proposed penalty compares well against popular alternatives.
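As a rough illustration of this kind of penalty: the sketch below leaves the two norms as parameters (the specific pair is garbled in the source), and fits the penalized least-squares objective by plain subgradient descent, a crude stand-in for the paper's homotopy algorithm, which tracks whole solution surfaces in the tuning parameters rather than a single solution.

```python
# A rough sketch of a convex-combination-of-norms penalty; which two norms the
# paper combines is garbled in the source, so they are left as parameters here
# (L1/L-infinity is used in the fitting routine purely as an illustration).
import numpy as np

def combined_penalty(beta, alpha, ord_a=1, ord_b=np.inf):
    # alpha * ||beta||_a + (1 - alpha) * ||beta||_b with 0 <= alpha <= 1
    return (alpha * np.linalg.norm(beta, ord_a)
            + (1.0 - alpha) * np.linalg.norm(beta, ord_b))

def fit_subgradient(X, y, lam=0.1, alpha=0.5, steps=2000, lr=0.01):
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(steps):
        grad = X.T @ (X @ beta - y) / n            # smooth least-squares part
        g1 = np.sign(beta)                         # subgradient of ||.||_1
        ginf = np.zeros(p)                         # subgradient of ||.||_inf
        j = int(np.argmax(np.abs(beta)))
        ginf[j] = np.sign(beta[j]) if beta[j] != 0 else 1.0
        beta = beta - lr * (grad + lam * (alpha * g1 + (1 - alpha) * ginf))
    return beta

# Toy usage: two equal-signal predictors, the kind of grouped structure
# such penalties are designed to tie together.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 6))
y = X @ np.array([2.0, 2.0, 0, 0, 0, 0]) + 0.1 * rng.normal(size=200)
print(fit_subgradient(X, y))
```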
When applying the support vector machine (SVM) to high-dimensional classification problems, we often impose a sparse structure on the SVM to eliminate the influence of irrelevant predictors. The lasso and other variable selection techniques have been successfully used in the SVM to perform automatic variable selection. In some problems, there is a natural hierarchical structure among the variables. Thus, in order to have an interpretable SVM classifier, it is important to respect the heredity principle when enforcing sparsity in the SVM. Many variable selection methods, however, do not respect the heredity principle. In this paper, we enforce both sparsity and the heredity principle in the SVM by using the structured variable selection (SVS) framework originally proposed by Yuan, Joseph and Zou (2007). We minimize the empirical hinge loss under a set of linear inequality constraints and a lasso-type penalty. The solution always obeys the desired heredity principle and enjoys sparsity. The new SVM classifier can be fitted efficiently because the optimization problem is a linear program. Another contribution of this work is a nonparametric extension of the SVS framework, under which we propose nonparametric heredity SVMs. Simulated and real data are used to illustrate the merits of the proposed method.

Comment: Published at http://dx.doi.org/10.1214/07-EJS125 in the Electronic Journal of Statistics (http://www.i-journals.org/ejs/) by the Institute of Mathematical Statistics (http://www.imstat.org).
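To see why such a problem is a linear program: the sketch below fits a lasso-penalized linear SVM with heredity-style constraints via scipy.optimize.linprog; the variable splitting, the surrogate constraints |beta_12| <= |beta_1| and |beta_12| <= |beta_2|, and the toy data are illustrative assumptions, not the paper's exact SVS formulation.

```python
# A minimal LP sketch: lasso-penalized linear SVM with heredity-style
# constraints, solved with scipy.optimize.linprog. Illustrative only,
# not the exact SVS formulation of the paper.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 60
x1, x2 = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([x1, x2, x1 * x2])              # main effects + interaction
y = np.sign(1.5 * x1 - x2 + 0.5 * x1 * x2 + 0.1 * rng.normal(size=n))
p, lam = X.shape[1], 0.5

# Decision vector z = [beta+ (p), beta- (p), b+, b-, xi (n)], all nonnegative,
# with beta = beta+ - beta-, so |beta_j| = beta+_j + beta-_j at the optimum.
c = np.concatenate([lam * np.ones(2 * p), [0.0, 0.0], np.ones(n)])

# Hinge constraints y_i (x_i' beta + b) + xi_i >= 1, rewritten as A_ub z <= b_ub.
A_margin = np.hstack([
    -y[:, None] * X,             # acts on beta+
    y[:, None] * X,              # acts on beta-
    -y[:, None], y[:, None],     # split intercept b = b+ - b-
    -np.eye(n),                  # slacks xi
])
b_margin = -np.ones(n)

# Heredity-style rows: |beta_child| - |beta_parent| <= 0.
def heredity_row(child, parent):
    row = np.zeros(2 * p + 2 + n)
    row[child] = row[p + child] = 1.0
    row[parent] = row[p + parent] = -1.0
    return row

A_ub = np.vstack([A_margin, heredity_row(2, 0), heredity_row(2, 1)])
b_ub = np.concatenate([b_margin, np.zeros(2)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(c))
beta = res.x[:p] - res.x[p:2 * p]
print("coefficients (x1, x2, x1*x2):", beta)
```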