Modal linear regression, proposed by Yao and Li (Scand J Stat 41(3):656-671, 2014), models the conditional mode of a response Y given a vector of covariates as a linear function of those covariates. To identify the conditional mode, existing methods rely on a kernel density estimator of the conditional distribution of Y given the covariates. Like other kernel-based methods, these require a suitable choice of tuning parameters, and no unified objective function exists for estimating the regression parameters. In this paper, we propose a model-based modal linear regression built on a family of log-concave distributions. The proposed method requires no tuning parameters and yields an explicit likelihood function. To estimate the regression parameters under an estimated log-concave density, we turn the log-likelihood into a sum of affine functions via a dual representation of piecewise linear concave functions, so that well-known linear programming techniques can be applied. Simulation studies show that the proposed method produces more efficient estimators than kernel-based methods. A real data example illustrates the applicability of the proposed method.
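The linear-programming reformulation described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' exact estimator: the helper name `fit_pwl_concave`, the choice of pieces, and the two-piece example g(u) = -|u| (which reduces the LP to least absolute deviations) are all hypothetical, chosen only to show how a piecewise-linear concave objective becomes a linear program.

```python
import numpy as np
from scipy.optimize import linprog

def fit_pwl_concave(X, y, slopes, intercepts):
    """Maximize sum_i g(y_i - x_i @ beta) over beta, where
    g(u) = min_k (slopes[k] * u + intercepts[k]) is piecewise-linear
    and concave.  Introducing auxiliary variables with
    t_i <= slopes[k] * (y_i - x_i @ beta) + intercepts[k] for every
    (i, k) turns the problem into an LP in (beta, t): maximize sum_i t_i."""
    n, p = X.shape
    K = len(slopes)
    c = np.concatenate([np.zeros(p), -np.ones(n)])  # minimize -sum(t)
    A = np.zeros((n * K, p + n))
    b = np.zeros(n * K)
    for k in range(K):
        r = k * n + np.arange(n)
        A[r, :p] = slopes[k] * X           # slopes[k] * (x_i @ beta) ...
        A[r, p + np.arange(n)] = 1.0       # ... + t_i
        b[r] = slopes[k] * y + intercepts[k]
    res = linprog(c, A_ub=A, b_ub=b,
                  bounds=[(None, None)] * (p + n), method="highs")
    return res.x[:p]

# With g(u) = min(u, -u) = -|u| the LP is least absolute deviations,
# so a noiseless line is recovered exactly.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 40)
X = np.column_stack([np.ones_like(x), x])
y = 2.0 + 3.0 * x
beta = fit_pwl_concave(X, y, slopes=[1.0, -1.0], intercepts=[0.0, 0.0])
```

In the paper's setting the pieces (slopes, intercepts) would come from the estimated log-concave log-density, whose logarithm is piecewise linear; the LP structure above is what makes that estimation step tractable.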
Gaussian error distributions are a common choice in traditional regression models fitted by maximum likelihood (ML). However, this distributional assumption is often suspect, especially when the error distribution is skewed or heavy-tailed. In either case, the ML method under normality can break down or lose efficiency. In this paper, we consider log-concave and Gaussian scale mixture distributions for the errors. For log-concave errors, we propose a smoothed maximum likelihood estimator for stable and faster computation. Based on this, we perform comparative simulation studies of the coefficient estimates under normal, Gaussian scale mixture, and log-concave errors. We also present real data analyses of the stack loss plant data and the Korean Labor and Income Panel data.
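The Gaussian scale mixture errors mentioned above can be illustrated with the classical scale-mixture representation of the Student-t distribution; this sketch only demonstrates the representation (a heavy-tailed error built from normals) and is not part of the paper's estimator.

```python
import numpy as np

# A Student-t error with nu degrees of freedom is a Gaussian scale
# mixture: e = sqrt(w) * z, with z ~ N(0, 1) and w = nu / chi2(nu)
# (an inverse-gamma mixing distribution on the variance).
rng = np.random.default_rng(42)
nu, n = 5, 200_000
z = rng.standard_normal(n)
w = nu / rng.chisquare(nu, n)   # random variance scale per observation
e = np.sqrt(w) * z              # marginally t-distributed, heavy-tailed
```

Mixing over the variance is what inflates the tails relative to a single Gaussian: for nu = 5 the marginal variance is nu/(nu-2) = 5/3 even though each conditional component is standard normal.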
Penalized least squares methods are important tools for simultaneously selecting variables and estimating parameters in linear regression. Penalized maximum likelihood can serve the same purpose, provided the error distribution falls in a certain parametric family; however, relying on a particular parametric family risks misspecification, which undermines estimation accuracy. To give the error distribution sufficient flexibility, we propose to use a symmetric log-concave error distribution with a LASSO penalty. A feasible algorithm for estimating both the nonparametric and parametric components of the proposed model is provided. Numerical studies show that the proposed method produces more efficient estimators than some existing methods, with similar variable selection performance.
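As background for the penalized least squares setting, here is a minimal coordinate-descent sketch of the plain LASSO (squared-error loss, not the proposed log-concave likelihood); the function names `lasso_cd` and `soft_threshold` are illustrative, not from the paper.

```python
import numpy as np

def soft_threshold(z, g):
    """Soft-thresholding operator, the proximal map of the L1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - g, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate descent for (1/(2n)) ||y - X b||^2 + lam ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]   # partial residual without x_j
            rho = X[:, j] @ r_j / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
    return b

# Two active coefficients among five; the L1 penalty zeroes out the rest.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1]
b = lasso_cd(X, y, lam=0.1)
```

The soft-thresholding step is what produces exact zeros, and hence variable selection; the paper's contribution is to pair this penalty with a flexible symmetric log-concave likelihood instead of the squared-error loss shown here.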