2016
DOI: 10.1137/15m1052184
Transformations and Hardy--Krause Variation

Abstract: Using a multivariable Faà di Bruno formula we give conditions on transformations τ : [0,1]^m → X, where X is a closed and bounded subset of R^d, such that f∘τ is of bounded variation in the sense of Hardy and Krause for all f ∈ C^d(X). We give similar conditions for f∘τ to be smooth enough for scrambled net sampling to attain O(n^{-3/2+ε}) accuracy. Some popular symmetric transformations to the simplex and sphere are shown to satisfy neither condition. Some other transformations due to Fang and Wang (1993)…
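To ground the setting, here is a minimal sketch (an illustrative assumption, not the paper's construction): a Lambert-style area-preserving map τ from [0,1]^2 to the sphere S^2 is composed with a smooth test integrand f, and f∘τ is integrated once with a scrambled Sobol' net (via scipy.stats.qmc) and once with plain Monte Carlo. Both τ and f are hypothetical choices made for this sketch.

```python
# A minimal sketch, NOT the paper's construction: integrate f∘tau over [0,1]^2
# with a scrambled Sobol' net and with plain Monte Carlo. tau and f are
# illustrative choices.
import numpy as np
from scipy.stats import qmc

def tau(u):
    # Lambert-style area-preserving map [0,1]^2 -> S^2 (illustrative choice)
    z = 1.0 - 2.0 * u[:, 0]
    phi = 2.0 * np.pi * u[:, 1]
    r = np.sqrt(np.maximum(0.0, 1.0 - z ** 2))
    return np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

def f(x):
    # smooth test integrand on the sphere; its spherical average is sinh(1)
    return np.exp(x[:, 2])

n = 2 ** 12
u_qmc = qmc.Sobol(d=2, scramble=True, seed=0).random(n)
u_mc = np.random.default_rng(0).random((n, 2))
print("scrambled net:", f(tau(u_qmc)).mean())
print("plain MC:     ", f(tau(u_mc)).mean())
print("true value:   ", np.sinh(1.0))
```

Since τ is area-preserving, E[f(τ(U))] equals the spherical average sinh(1) ≈ 1.1752; whether the scrambled net attains the O(n^{-3/2+ε}) rate hinges on the smoothness of f∘τ, which is exactly the question the paper addresses.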

Citation types: 0 supporting, 22 mentioning, 0 contrasting. Citing publications range from 2017 to 2024.
Cited by 22 publications (22 citation statements). References 20 publications (27 reference statements).
“…This is shown in Figure 3. Throughout this simulation we keep n = 4^6. The intervals shown in red fail to contain the true value of µ, and as a result it can be seen that we have the desired control over the coverage of the confidence intervals.…”
Section: Asymptotic Normality (mentioning; confidence: 99%)
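One common way to produce intervals like those described (a hedged sketch; the integrand, the replicate count R, and the normal-theory interval are assumptions, not the cited paper's exact setup) is to average R independently scrambled Sobol' nets of n = 4^6 points each and form an interval across replicates:

```python
# Hedged sketch: coverage of normal-theory CIs built from R independent
# scrambled Sobol' replicates of n = 4**6 points each. The integrand, R, and
# the number of outer repetitions are illustrative assumptions.
import numpy as np
from scipy.stats import qmc

def f(u):
    return np.prod(1.0 + (u - 0.5), axis=1)  # true mean mu = 1 on [0,1]^3

mu, n, R, reps = 1.0, 4 ** 6, 30, 100
rng = np.random.default_rng(1)
covered = 0
for _ in range(reps):
    # R internal replicates, each a freshly scrambled net of n points
    means = np.array([
        f(qmc.Sobol(d=3, scramble=True,
                    seed=int(rng.integers(2 ** 31))).random(n)).mean()
        for _ in range(R)
    ])
    half = 1.96 * means.std(ddof=1) / np.sqrt(R)  # 95% normal-theory half-width
    covered += abs(means.mean() - mu) <= half
print(f"empirical coverage: {covered / reps:.2f}")  # should sit near 0.95
```

Repetitions whose interval misses µ play the role of the red intervals in the quoted figure.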
“…Measure-preserving mappings from the unit cube to such shapes work very well for plain Monte Carlo. Unfortunately, the composition of the integrand with the mapping may fail to have even mild smoothness properties that QMC exploits [6].…”
(mentioning; confidence: 99%)
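To see concretely how that smoothness can fail, consider the Lambert-style map τ sketched after the abstract above (an illustrative choice, not necessarily the mapping meant in [6]). The distance r from the polar axis satisfies

$$
r(u_1) = \sqrt{1 - (1 - 2u_1)^2} = 2\sqrt{u_1(1 - u_1)},
\qquad
\frac{\partial r}{\partial u_1} = \frac{1 - 2u_1}{\sqrt{u_1(1 - u_1)}},
$$

which is unbounded as u_1 → 0 or u_1 → 1. Even for smooth f, the partial derivatives of f∘τ can therefore blow up at the boundary of [0,1]^2, and the Hardy–Krause variation of the composition need not be finite.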
“…What we should take from the above bounds is the fundamental role the hypothesis space choice plays in the approximation of the true error. If H is not too complex in the VC or Rademacher sense, that is, if it does not have a large variety of functions, then their empirical loss is fairly close to the true loss for any dataset S. On the other hand, if either the VC dimension or the Rademacher complexity is infinite, then we cannot possibly expect to learn and we should look for simpler solutions.…”
Section: The Foundations (mentioning; confidence: 99%)
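The bounds alluded to typically take the following shape (a representative statement for a loss bounded in [0,1]; constants vary across sources, and this need not be the citing paper's exact inequality): with probability at least 1 − δ over the draw of S ~ µ^n,

$$
L(h) \;\le\; \widehat{L}_S(h) + 2\,\mathfrak{R}_n(\mathcal{H}) + \sqrt{\frac{\ln(1/\delta)}{2n}}
\quad \text{for every } h \in \mathcal{H}.
$$

A small Rademacher complexity ℜ_n(H) thus forces the empirical loss to track the true loss uniformly over H, while a class whose complexity does not vanish with n yields no control at all.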
“…That is, it will receive a dataset S ⊆ Z^n sampled from µ^n as an input, and sample a hypothesis h according to the induced distribution P(H|S) in the hypothesis class H parametrized by the weight space W. Notice that H is now a random variable with values in H, whose distribution is characterized by the algorithm, defining a stochastic learning map. Thus, it makes sense to ask how much information is shared by S and the output, or in other words, how the prior's entropy H(W) induced by the algorithm changes when it reads the dataset.…”
Section: Stability in Stochastic Algorithms (mentioning; confidence: 99%)
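That shared information is the mutual information I(S; W) = H(W) − H(W|S), and a well-known result in this line of work (due to Xu and Raginsky, quoted here as representative background rather than as the citing paper's own bound) controls the expected generalization gap of a σ-sub-Gaussian loss:

$$
\bigl|\,\mathbb{E}\bigl[L(W) - \widehat{L}_S(W)\bigr]\,\bigr|
\;\le\;
\sqrt{\frac{2\sigma^2}{n}\, I(S; W)}.
$$

An algorithm that extracts few bits from its dataset therefore cannot overfit much in expectation.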