2006
DOI: 10.1214/009053606000000335

Convergence of algorithms for reconstructing convex bodies and directional measures

Abstract: We investigate algorithms for reconstructing a convex body K in R^n from noisy measurements of its support function or its brightness function in k directions u_1, ..., u_k. The key idea of these algorithms is to construct a convex polytope P_k whose support function (or brightness function) best approximates the given measurements in the directions u_1, ..., u_k (in the least squares sense). The measurement errors are assumed to be stochastically independent and Gaussian. It is shown that this procedure i…
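The least-squares idea described in the abstract can be sketched in the plane. This is a hedged illustration, not the authors' implementation: we parametrize the polytope by one point x_i per measurement direction (a standard device in support-function fitting), so that the consistency constraints <x_j, u_i> <= <x_i, u_i> make the fitted values h_i = <x_i, u_i> genuine support values of conv{x_1, ..., x_k}. All names, the body, and the noise level are ours.

```python
# Hedged sketch of least-squares support-function reconstruction in R^2.
# Not the paper's code: illustrative parametrization and parameters only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
k = 12
angles = 2 * np.pi * np.arange(k) / k
U = np.column_stack([np.cos(angles), np.sin(angles)])  # directions u_1..u_k

# True body: the square [-1, 1]^2, with support function h(u) = |u_1| + |u_2|.
h_true = np.abs(U).sum(axis=1)
y = h_true + rng.normal(scale=0.05, size=k)            # noisy measurements

def objective(x_flat):
    # Sum of squared residuals between fitted support values and data.
    X = x_flat.reshape(k, 2)
    h = np.einsum("ij,ij->i", X, U)                    # h_i = <x_i, u_i>
    return np.sum((h - y) ** 2)

def consistency(x_flat):
    # All k*k inequalities  <x_i, u_i> - <x_j, u_i> >= 0, which force
    # h_i to be the support value of conv{x_1, ..., x_k} at u_i.
    X = x_flat.reshape(k, 2)
    G = X @ U.T                                        # G[j, i] = <x_j, u_i>
    return (np.diag(G)[None, :] - G).ravel()

# Feasible start: points on the unit circle.  The problem is a convex QP
# (quadratic objective, linear constraints), so SLSQP finds the optimum.
res = minimize(objective, x0=U.flatten(), method="SLSQP",
               constraints=[{"type": "ineq", "fun": consistency}])
h_hat = np.einsum("ij,ij->i", res.x.reshape(k, 2), U)
print("max abs error in fitted support values:", np.abs(h_hat - h_true).max())
```

The fitted polytope itself is conv{x_1, ..., x_k}; since the true support values are feasible, the constrained fit cannot be farther from the data than the truth is, which is the intuition behind the paper's consistency result.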

Cited by 58 publications (87 citation statements) · References 29 publications
“…If long-range dependence in Z is not present or negligible (as in the example of a Boolean model of convex grains with uniformly bounded diameter), this independence can (under ergodicity assumptions, approximately) be assured by placing the test line segments W in the definition of γ+ far enough apart. We mention that the above convergence result can also be derived under stronger (but somewhat unrealistic) assumptions from [6, Section 9]. The speed-of-convergence result, also shown there, cannot be transferred directly to the present situation, as the restriction of measures in (4.6) is not a Lipschitz mapping in the Prohorov metric.…”
Section: Estimation of the Mean Normal Measure from Vertical Sections (mentioning)
confidence: 85%
“…[6]. This means that among all the solutions of (4.4) (and similarly among those of (4.5)) there exists one with support in a finite set T of prescribed directions, where T depends only on the measurement directions v_ij.…”
Section: Estimation of the Mean Normal Measure from Vertical Sections (mentioning)
confidence: 99%
“…It is well known (for example, see Proposition 3.1 in [2]) that for each d ≥ 2 there is a constant c(d), depending only on d, such that for all ε > 0 there exists an ε-dense set on S^{d−1} of at most c(d)ε^{−(d−1)} points. For i = 1, …”
Section: Preliminaries and Notation (mentioning)
confidence: 99%
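The covering bound quoted above (at most c(d)ε^{−(d−1)} points suffice for an ε-dense set on S^{d−1}) can be illustrated numerically. A hedged sketch, assuming that a greedy maximal ε-separated subset of a large random sample is an acceptable stand-in for a true ε-net; the function name and the candidate count are ours, and the constant c(d) is not computed:

```python
# Hedged sketch: an approximately eps-dense set on S^{d-1} by greedy
# selection from uniform random candidates (normalized Gaussian vectors).
import numpy as np

def greedy_eps_net(d, eps, n_candidates=5000, seed=0):
    rng = np.random.default_rng(seed)
    C = rng.normal(size=(n_candidates, d))
    C /= np.linalg.norm(C, axis=1, keepdims=True)   # uniform points on S^{d-1}
    net = C[:1]
    for p in C[1:]:
        # Keep p only if it is more than eps away from every chosen point;
        # the result is eps-separated and eps-dense w.r.t. the candidates.
        if np.linalg.norm(net - p, axis=1).min() > eps:
            net = np.vstack([net, p])
    return net

net = greedy_eps_net(d=3, eps=0.5)
print(len(net), "points in the approximate 0.5-net on S^2")
```

For d = 3 the count stays on the order of ε^{−2}, consistent with the quoted bound; a fine deterministic grid would give a true ε-net at the cost of more bookkeeping.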
“…Algorithm NoisyCovBlaschke utilizes the known fact that −∂g_{K_0}(tu)/∂t, evaluated at t = 0, equals the brightness function value b_{K_0}(u), that is, the (n − 1)-dimensional volume of the orthogonal projection of K_0 in the direction u. This connection allows most of the work to be done by a very efficient algorithm, Algorithm NoisyBrightLSQ, designed earlier by Gardner and Milanfar (see [24]) for reconstructing an o-symmetric convex body from finitely many noisy measurements of its brightness function.…”
Section: Introduction (mentioning)
confidence: 99%
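The quantity quoted above, the brightness function b_{K_0}(u) as the (n − 1)-volume of the shadow of K_0 in direction u, can be computed directly for a polytope in R^3. A minimal sketch (the setup and names are ours, not from the paper): project the vertices onto a basis of the plane u⊥ and take the area of their 2D convex hull.

```python
# Hedged sketch: brightness function of a convex polytope in R^3, computed
# as the area of its orthogonal projection onto the plane u-perp.
import numpy as np
from scipy.spatial import ConvexHull

def brightness(vertices, u):
    u = np.asarray(u, float)
    u = u / np.linalg.norm(u)
    # Build an orthonormal basis (b1, b2) of the plane orthogonal to u.
    e = np.zeros(3)
    e[np.argmin(np.abs(u))] = 1.0        # axis vector least aligned with u
    b1 = np.cross(u, e)
    b1 /= np.linalg.norm(b1)
    b2 = np.cross(u, b1)
    # Project the vertices; the shadow is the 2D convex hull of the images.
    shadow = vertices @ np.column_stack([b1, b2])
    return ConvexHull(shadow).volume     # in 2D, .volume is the area

cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                dtype=float)             # unit cube [0, 1]^3
print(round(brightness(cube, np.array([0.0, 0.0, 1.0])), 6))  # → 1.0
```

For the unit cube the brightness function is |u_1| + |u_2| + |u_3| for unit u, so the diagonal direction (1, 1, 1)/√3 gives √3, which the sketch reproduces.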