2018
DOI: 10.1080/01621459.2017.1407771
A Computational Framework for Multivariate Convex Regression and Its Variants

Abstract: We study the nonparametric least squares estimator (LSE) of a multivariate convex regression function. The LSE, given as the solution to a quadratic program with O(n²) linear constraints (n being the sample size), is difficult to compute for large problems. Exploiting problem-specific structure, we propose a scalable algorithmic framework based on the augmented Lagrangian method to compute the LSE. We develop a novel approach to obtain smooth convex approximations to the fitted (piecewise affine) convex LSE …
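For context, the quadratic program the abstract refers to can be written down directly: one fitted value and one subgradient per data point, with a convexity constraint for every ordered pair of points. The sketch below is a minimal illustration using cvxpy and an off-the-shelf solver; the toy data and variable names are my own, and this is the direct O(n²)-constraint formulation that the paper's augmented Lagrangian method is designed to scale past, not the paper's algorithm itself.

```python
# Minimal sketch: the convex-regression LSE as a quadratic program,
# solved with a generic solver via cvxpy. Illustrative only -- this is
# the O(n^2)-constraint formulation the paper starts from, NOT its
# scalable augmented Lagrangian method.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, d = 30, 2                        # keep n small: n*(n-1) constraints
X = rng.normal(size=(n, d))         # design points x_1, ..., x_n
y = np.sum(X**2, axis=1) + 0.1 * rng.normal(size=n)  # convex truth + noise

theta = cp.Variable(n)              # fitted values theta_i = f_hat(x_i)
xi = cp.Variable((n, d))            # subgradients xi_i of f_hat at x_i

# Convexity constraints: theta_j >= theta_i + xi_i^T (x_j - x_i), i != j.
constraints = [
    theta[j] >= theta[i] + xi[i] @ (X[j] - X[i])
    for i in range(n) for j in range(n) if i != j
]

prob = cp.Problem(cp.Minimize(cp.sum_squares(y - theta)), constraints)
prob.solve()
print(f"LSE objective: {prob.value:.4f}")
```

The solution extends to a piecewise affine convex function via f̂(x) = max_i {θ_i + ξ_iᵀ(x − x_i)}, which is the fitted form the paper then smooths.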

Cited by 67 publications (97 citation statements) · References 41 publications
“…The squared error prox function is given by […]. Mazumder et al. (2019) show that the optimization in eq. (3) with the squared error prox function is equivalent to the convex program…”
Section: Squared Error Prox Function
confidence: 99%
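The excerpt does not reproduce eq. (3) or the resulting program. For reference, the standard finite-dimensional convex program characterizing the convex-regression LSE, consistent with the O(n²)-constraint description in the abstract (notation θ, ξ is mine, not necessarily the citing paper's), is:

```latex
\begin{aligned}
\min_{\theta \in \mathbb{R}^{n},\; \xi_{1},\dots,\xi_{n} \in \mathbb{R}^{d}}
  \quad & \tfrac{1}{2}\sum_{i=1}^{n} \left( y_{i} - \theta_{i} \right)^{2} \\
\text{subject to}
  \quad & \theta_{j} \;\ge\; \theta_{i} + \xi_{i}^{\top}\!\left( x_{j} - x_{i} \right),
  \qquad 1 \le i \ne j \le n,
\end{aligned}
```

where θ_i is the fitted value at x_i and ξ_i is a subgradient of the fit at x_i.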
“…We use the version of the data that incorporates the minor corrections found by Gilley and Pace (1996). We fit the fully convex or concave double-cone algorithm with the 14-dimensional predictor, fitted by using the augmented Lagrangian method that was developed by Mazumder et al. (2015). Our procedure, assuming normal errors, yields a p-value of 0.01 (if the error distribution is left completely unspecified, our bootstrap procedure, developed in Theorem 1, can be used and yields a p-value of 0.02) and rejects the linear model specification, as used in Harrison and Rubinfeld (1978), whereas the method of Stute et al. (1998) yields a p-value of more than 0.2.…”
Section: Testing Against a Linear Model Assuming Additivity
confidence: 99%
“…[1] gives an O(n log n) algorithm for unweighted L2 Lipschitz isotonic regression and an O(n poly(log n))-time algorithm for Lipschitz unimodal regression. [24] describes (multidimensional) L2 convex regression algorithms based on quadratic programming. Fefferman [8] studied a closely related problem of smooth interpolation of data in Euclidean space minimizing a certain norm defined on the derivatives of the function.…”
Section: Introduction
confidence: 99%
“…Motivated by these applications, the problems have been studied in statistics (as a form of nonparametric regression), investigating, e.g., their consistency as estimators and their rates of convergence [13, 14, 4]. While fast algorithms for isotonic-regression variants have been designed [27], both [22] and [3] list shape constraints beyond monotonicity as important challenges. For example, fitting (multidimensional) convex functions is mostly done via quadratic or linear programming solvers [24]. In his PhD thesis, Balázs writes that current “methods are computationally too expensive for practical use, [so] their analysis is used for the design of a heuristic training algorithm which is empirically evaluated” [4, p. 1]. This lack of efficient algorithms motivated the present work.…”
confidence: 99%