2011
DOI: 10.1007/s10957-011-9865-8

Strict L∞ Isotonic Regression

Abstract: Given a function f and weights w on the vertices of a directed acyclic graph G, an isotonic regression of (f, w) is an order-preserving real-valued function that minimizes the weighted distance to f among all order-preserving functions. When the distance is given via the supremum norm there may be many isotonic regressions. One of special interest is the strict isotonic regression, which is the limit of p-norm isotonic regression as p approaches infinity. Algorithms for determining it are given. We also examin…
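The abstract's setup can be made concrete in the simplest special case. The sketch below, a hypothetical illustration not taken from the paper, computes one L∞ isotonic regression for a chain (a linear order) with unit weights, using the classic midpoint-of-envelopes construction; the paper's weighted and general-dag algorithms are substantially more involved, and this construction generally does not produce the strict regression.

```python
# Minimal sketch: unweighted L-infinity isotonic regression on a chain
# v_1 < ... < v_n. This is NOT the paper's algorithm, just the standard
# midpoint construction for the simplest case.

def linf_isotonic_chain(f):
    """Return an order-preserving g minimizing max_i |g_i - f_i|.

    g_i is the average of the running maximum of f[0..i] and the
    running minimum of f[i..]; both envelopes are nondecreasing,
    so g is isotonic, and g achieves the optimal sup-norm error.
    """
    n = len(f)
    prefix_max = []
    cur = float("-inf")
    for x in f:
        cur = max(cur, x)
        prefix_max.append(cur)
    suffix_min = [0.0] * n
    cur = float("inf")
    for i in range(n - 1, -1, -1):
        cur = min(cur, f[i])
        suffix_min[i] = cur
    return [(a + b) / 2 for a, b in zip(prefix_max, suffix_min)]

g = linf_isotonic_chain([3, 1, 2.5])
print(g)  # [2.0, 2.0, 2.75]
```

Note that the result is only one of possibly many L∞ isotonic regressions for this data; the strict regression studied in the paper is a particular, uniquely determined choice among them.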

Cited by 10 publications (28 citation statements)
References 21 publications
“…A similar situation occurs for L∞ isotonic regression. Algorithms have been developed for strict L∞ isotonic regression [23,25], which is the limit, as p → ∞, of Lp isotonic regression. This is also known as "best best" L∞ regression [13].…”
Section: Final Comments
confidence: 99%
“…It has been called the "best best" L∞ isotonic regression [21], and the limit process is known as the "Polya algorithm" [28]. For arbitrary dags, given the transitive closure, Strict can be determined in Θ(n² log n) time [37].…”
Section: L∞ Isotonic Regression
confidence: 99%
“…Strict minimizes the number of large errors [37], in the following sense: for any dag G and data (f, w), if g ≠ Strict(f, w) is an isotonic function on G, then there is a C > 0 such that g has more vertices with regression error ≥ C than does Strict(f, w), and for any D > C, g and Strict(f, w) have the same number of vertices with regression error ≥ D (there might be a d < C where g has fewer vertices with regression error ≥ d than does Strict, but the emphasis is on large errors). For example, for the function with values 3, 1, 2.5 and weights 2, 2, 1, Strict is 2, 2, 2.5, as are all Lp isotonic regressions for 1 < p < ∞, but all of the other L∞ regression mappings considered here have a nonzero error for the third value.…”
Section: L∞ Isotonic Regression
confidence: 99%
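The "minimizes large errors" property quoted above can be checked numerically on its own example: values (3, 1, 2.5) with weights (2, 2, 1). The quote states Strict is (2, 2, 2.5); the comparison mapping (2, 2, 2.75) used below is a hypothetical alternative chosen by us, not from the source, but it is isotonic and also achieves the optimal weighted sup-norm error of 2, so it is a valid L∞ regression to compare against.

```python
# Check the large-error property on the quoted example:
# Strict has no more vertices than the alternative at any error
# threshold, and strictly fewer at some threshold below C.

f = [3, 1, 2.5]
w = [2, 2, 1]
strict_g = [2, 2, 2.5]   # Strict, per the quoted statement
other_g = [2, 2, 2.75]   # another optimal L-inf regression (our choice)

def weighted_errors(g):
    return [wi * abs(gi - fi) for fi, wi, gi in zip(f, w, g)]

def count_at_least(errs, t):
    return sum(1 for e in errs if e >= t)

e_strict = weighted_errors(strict_g)  # [2, 2, 0.0]
e_other = weighted_errors(other_g)    # [2, 2, 0.25]

# Both achieve the optimal sup-norm error of 2 ...
print(max(e_strict), max(e_other))    # 2 2
# ... but Strict has strictly fewer vertices with error >= 0.1.
print(count_at_least(e_strict, 0.1))  # 2
print(count_at_least(e_other, 0.1))   # 3
```

At thresholds above 2 both counts drop to zero together, matching the "same number of vertices with regression error ≥ D" clause for D > C.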