The dispersion of a point set in $[0,1]^d$ is the volume of the largest axis-parallel box inside the unit cube that does not intersect the point set. We study the expected dispersion with respect to a random set of $n$ points given by an i.i.d. sequence of uniformly distributed random variables. Depending on the number of points $n$ and the dimension $d$, we provide upper and lower bounds on the expected dispersion. In particular, we show that the minimal number of points required to achieve an expected dispersion less than $\varepsilon \in (0,1)$ depends linearly on the dimension $d$.
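For intuition, here is a minimal Monte Carlo sketch (ours, not from the paper) that estimates the expected dispersion for $d = 1$, where the largest empty axis-parallel box is simply the largest gap between consecutive sorted points, including the gaps to the boundaries $0$ and $1$. The function names and the trial count are illustrative choices.

```python
import random

def dispersion_1d(points):
    """Largest empty subinterval of [0, 1]: the maximal gap between
    consecutive sorted points, counting the boundaries 0 and 1."""
    xs = sorted(points)
    gaps = [xs[0]] + [b - a for a, b in zip(xs, xs[1:])] + [1 - xs[-1]]
    return max(gaps)

def expected_dispersion_1d(n, trials=10_000, seed=0):
    """Monte Carlo estimate of the expected dispersion of n i.i.d.
    uniform points on [0, 1]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += dispersion_1d([rng.random() for _ in range(n)])
    return total / trials

if __name__ == "__main__":
    for n in (10, 100, 1000):
        # The expected maximum spacing of n uniform points on [0, 1]
        # decays roughly like (log n) / n.
        print(n, expected_dispersion_1d(n))
```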
We aim to compute the integral of a function or the expectation of a random variable at minimal cost, using i.i.d. samples both for our new algorithm and for the upper complexity bounds. Under certain assumptions it is possible to select a sample size based on a variance estimate or, more generally, on an estimate of a (central absolute) $p$-moment. In this way one can guarantee a small absolute error with high probability; the problem is then called solvable. The expected cost of the method depends on the $p$-moment of the random variable, which can be arbitrarily large. In order to prove the optimality of our algorithm we also provide lower bounds. These bounds apply not only to methods based on i.i.d. samples but also to general randomized algorithms. They show that, up to constants, the cost of the algorithm is optimal in terms of accuracy, confidence level, and norm of the particular input random variable. Since the considered classes of random variables or integrands are very large, the worst-case cost would be infinite. Nevertheless, one can define adaptive stopping rules such that for each input the expected cost is finite. We contrast these positive results with examples of integration problems that are not solvable.
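As a schematic illustration (not the paper's algorithm), the following sketch shows the basic idea behind a variance-based sample-size rule: a pilot sample yields a variance estimate, and Chebyshev's inequality $P(|\bar X_n - \mu| \ge \varepsilon) \le \sigma^2/(n\varepsilon^2)$ then suggests a sample size that bounds the error by $\varepsilon$ with probability at least $1 - \delta$. The pilot size and the use of the empirical variance as a stand-in for $\sigma^2$ are simplifying assumptions; a rigorous guarantee needs to account for the uncertainty in the variance estimate.

```python
import math
import random
import statistics

def adaptive_mean(sample, eps, delta, pilot=100):
    """Sketch of a two-stage mean estimator: estimate the variance from
    a pilot sample, then draw enough further samples so that Chebyshev's
    inequality bounds the error by eps with probability >= 1 - delta.
    `sample` is a zero-argument callable returning one i.i.d. draw."""
    pilot_draws = [sample() for _ in range(pilot)]
    var_hat = statistics.variance(pilot_draws)  # empirical variance
    # Chebyshev: P(|mean - mu| >= eps) <= var / (n * eps^2) <= delta
    n = max(pilot, math.ceil(var_hat / (eps**2 * delta)))
    draws = pilot_draws + [sample() for _ in range(n - pilot)]
    return sum(draws) / n

if __name__ == "__main__":
    rng = random.Random(0)
    est = adaptive_mean(lambda: rng.expovariate(1.0), eps=0.05, delta=0.05)
    print(est)  # should be close to the true mean 1.0
```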
We study the $L_\infty$-approximation of $d$-variate functions from Hilbert spaces via linear functionals as information. It is a common phenomenon in tractability studies that unweighted problems (with each dimension being equally important) suffer from the curse of dimensionality in the deterministic setting, that is, the number $n(\varepsilon, d)$ of information needed in order to solve the problem to within a given accuracy $\varepsilon$ grows exponentially in $d$. We show that for certain approximation problems in periodic tensor product spaces, in particular Korobov spaces with smoothness $r > 1/2$, switching to the randomized setting can break the curse of dimensionality; we then obtain polynomial tractability, namely $n(\varepsilon, d) \preceq \varepsilon^{-2}\, d\,(1 + \log d)$. Similar benefits of Monte Carlo methods in terms of tractability have so far only been known for integration problems.
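For readers unfamiliar with the tractability terminology, a short display (standard in information-based complexity, not specific to this paper) makes the claim precise: polynomial tractability means the information complexity is bounded polynomially in $\varepsilon^{-1}$ and $d$, and the stated bound is an instance with exponents $p = 2$ and $q = 1$ up to the logarithmic factor.

```latex
% Polynomial tractability: there exist constants C, p, q >= 0 such that
\[
  n(\varepsilon, d) \;\le\; C\, \varepsilon^{-p}\, d^{\,q}
  \quad \text{for all } \varepsilon \in (0,1),\ d \in \mathbb{N}.
\]
% The randomized bound above corresponds to p = 2 and q = 1,
% up to the factor (1 + \log d).
```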
We consider the order of convergence for linear and nonlinear Monte Carlo approximation of compact embeddings from Sobolev spaces of dominating mixed smoothness defined on the torus $\mathbb{T}^d$ into the space $L_\infty(\mathbb{T}^d)$ via methods that use arbitrary linear information. These cases are interesting because we can gain a speedup of up to $1/2$ in the main rate compared to the worst-case setting. In doing so, we determine the rate for some cases that had been left open by Fang and Duan.
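Schematically, and ignoring logarithmic factors in $n$ and $d$ (a simplification on our part, not a statement from the abstract), the speedup of $1/2$ in the main rate can be pictured as follows.

```latex
% If the deterministic worst-case error decays with main rate n^{-alpha},
% randomized methods can improve the exponent by up to 1/2:
\[
  e^{\mathrm{det}}(n) \;\sim\; n^{-\alpha}
  \quad\Longrightarrow\quad
  e^{\mathrm{ran}}(n) \;\sim\; n^{-(\alpha + 1/2)}
  \quad \text{(at best; log factors omitted)}.
\]
```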