Abstract. The radiation (reaction, Robin) boundary condition for the diffusion equation is widely used in chemical and biological applications to express reactive boundaries. The underlying trajectories of the diffusing particles are believed to be partially absorbed and partially reflected at the reactive boundary; however, the relation between the reaction constant in the Robin boundary condition and the reflection probability is not well defined. In this paper we define the partially reflected process as a limit of the Markovian jump process generated by the Euler scheme for the underlying Itô dynamics with partial boundary reflection. Trajectories that cross the boundary are terminated with probability P√∆t and are otherwise reflected in a normal or oblique direction. We use boundary layer analysis of the corresponding master equation to resolve the non-uniform convergence of the probability density function of the numerical scheme to the solution of the Fokker-Planck equation in a half space with the Robin constant κ. The boundary layer equation is of Wiener-Hopf type. We show that the Robin boundary condition is recovered if and only if trajectories are reflected in the co-normal direction σn, where σ is the (possibly anisotropic) constant diffusion matrix and n is the unit normal to the boundary. Otherwise, the density satisfies an oblique derivative boundary condition. The constant κ is related to P by κ = rP√σ_n, where r = 1/√π and σ_n = n^T σn. The reflection law and the relation are new for diffusion in higher dimensions.
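As a concrete illustration of the terminate-or-reflect rule described above, here is a minimal one-dimensional sketch of such a trajectory on the half-line x ≥ 0. It is not the authors' scheme: the scalar diffusion coefficient `sigma`, the step convention dx = √(2σ dt)·ξ, and the function name are assumptions made here for illustration; in one dimension the normal and co-normal reflections coincide.

```python
import numpy as np

def partially_reflected_path(x0, P, dt, n_steps, sigma=1.0, rng=None):
    """Euler scheme for diffusion on the half-line x >= 0 with a partially
    absorbing boundary at x = 0: a step that crosses the boundary is
    terminated with probability P*sqrt(dt), and otherwise reflected.
    Returns (path, survived). All names here are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    x, path = x0, [x0]
    for _ in range(n_steps):
        x_new = x + np.sqrt(2.0 * sigma * dt) * rng.standard_normal()
        if x_new < 0.0:                              # step crossed the boundary
            if rng.random() < P * np.sqrt(dt):       # terminate (absorb) ...
                return np.array(path), False
            x_new = -x_new                           # ... or reflect back inside
        x = x_new
        path.append(x)
    return np.array(path), True
```

In this isotropic one-dimensional setting the abstract's relation reduces to κ = P√σ/√π, so averaging the survival indicator over many such trajectories started near the boundary gives a numerical handle on the effective Robin constant as ∆t → 0.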
Prolate spheroidal wave functions (PSWFs) play an important role in various areas, from physics (e.g. wave phenomena, fluid dynamics) to engineering (e.g. signal processing, filter design). Even though the significance of PSWFs was realized at least half a century ago, and they frequently occur in applications, their analytical properties have not been investigated as thoroughly as those of many other special functions. In particular, despite some recent progress, the gap between asymptotic expansions and numerical experience, on the one hand, and rigorously proven explicit bounds and estimates, on the other hand, remains rather wide. This paper attempts to improve the current situation. We analyze the differential operator associated with PSWFs to derive fairly tight estimates on its eigenvalues. By combining these inequalities with a number of standard techniques, we also obtain several other properties of the PSWFs. The results are illustrated via numerical experiments.
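For orientation, the differential operator in question is usually written in the following standard form, with band-limit parameter c > 0 on the interval (-1, 1); this form is supplied here for reference and is not quoted from the paper.

```latex
% Standard prolate differential operator on (-1,1) with band limit c > 0;
% the \chi_n are the eigenvalues, \psi_n the corresponding PSWFs.
\begin{equation*}
  L_c[\psi](x)
    = -\frac{d}{dx}\!\left[(1 - x^2)\,\frac{d\psi}{dx}(x)\right] + c^2 x^2\,\psi(x),
  \qquad
  L_c[\psi_n] = \chi_n\,\psi_n , \quad -1 < x < 1 .
\end{equation*}
```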
Prolate spheroidal wave functions (PSWFs) play an important role in various areas, from physics (e.g. wave phenomena, fluid dynamics) to engineering (e.g. signal processing, filter design). One of the principal reasons for the importance of PSWFs is that they are a natural and efficient tool for computing with bandlimited functions, which frequently occur in the abovementioned areas. This is due to the fact that PSWFs are the eigenfunctions of the integral operator that represents time-limiting followed by low-passing. Needless to say, the behavior of this operator is governed by the decay rate of its eigenvalues. Therefore, investigation of this decay rate plays a crucial role in the related theory and applications, for example in the construction of quadratures, interpolation, filter design, etc. The significance of PSWFs and, in particular, of the decay rate of the eigenvalues of the associated integral operator was realized at least half a century ago. Nevertheless, perhaps surprisingly, despite vast numerical experience and the existence of several asymptotic expansions, a non-trivial explicit upper bound on the magnitude of the eigenvalues has been missing for decades. The principal goal of this paper is to close this gap in the theory of PSWFs. We analyze the integral operator associated with PSWFs to derive fairly tight non-asymptotic upper bounds on the magnitude of its eigenvalues. Our results are illustrated via several numerical experiments.
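For reference (again in the standard normalization, not quoted from the paper), the integral operator F_c of which the PSWFs are eigenfunctions, and the "time-limiting followed by low-passing" operator Q_c whose eigenvalue decay is at issue, can be written for band limit c > 0 as follows; the eigenvalues of Q_c are μ_n = (c/2π)|λ_n|².

```latex
% Standard normalization: band limit c > 0, time-limiting to [-1,1].
% The PSWFs \psi_n are the eigenfunctions of F_c with eigenvalues \lambda_n;
% Q_c = (c/2\pi) F_c^* F_c has eigenvalues \mu_n = (c/2\pi) |\lambda_n|^2.
\begin{align*}
  F_c[\varphi](x) &= \int_{-1}^{1} \varphi(t)\, e^{\,i c x t}\, dt ,
  \qquad F_c[\psi_n] = \lambda_n\,\psi_n , \\
  Q_c[\varphi](x) &= \frac{1}{\pi} \int_{-1}^{1}
                     \frac{\sin\bigl(c\,(x - t)\bigr)}{x - t}\,\varphi(t)\, dt ,
  \qquad -1 \le x \le 1 .
\end{align*}
```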
We present a randomized algorithm for the approximate nearest neighbor problem in d-dimensional Euclidean space. Given N points {x_j} in R^d, the algorithm attempts to find k nearest neighbors for each of the x_j, where k is a user-specified integer parameter. The algorithm is iterative, and its running time requirements are proportional to T·N·(d·log d + k·(d + log k)·log N) + N·k²·(d + log k), with T the number of iterations performed. The memory requirements of the procedure are of the order N·(d + k). A by-product of the scheme is a data structure, permitting a rapid search for the k nearest neighbors among {x_j} for an arbitrary point x ∈ R^d. The cost of each such query is proportional to T·(d·log d + log(N/k)·k·(d + log k)), and the memory requirements for the requisite data structure are of the order N·(d + k) + T·(d + N). The algorithm utilizes random rotations and a basic divide-and-conquer scheme, followed by a local graph search. We analyze the scheme's behavior for certain types of distributions of {x_j} and illustrate its performance via several numerical examples.

Keywords: data mining | dimensionality reduction | fast random rotations

In this paper, we describe an algorithm for finding approximate nearest neighbors (ANNs) in d-dimensional Euclidean space for each of N user-specified points {x_j}. For each point x_j, the scheme produces a list of k "suspects" that have high probability of being the k closest points (nearest neighbors) in the Euclidean metric. Those of the suspects that are not among the "true" nearest neighbors are close to being so. We present several measures of performance (in terms of statistics of the k chosen suspected nearest neighbors) for different types of randomly generated datasets consisting of N points in R^d. Unlike other ANN algorithms that have been recently proposed (see, e.g., ref. 1), the method of this paper does not use locality-sensitive hashing. Instead we use a simple randomized divide-and-conquer approach. The basic algorithm is iterated several times and then followed by a local graph search.

The performance of any fast ANN algorithm must deteriorate as the dimension d increases. Although the running time of our algorithm only grows as d·log d, the statistics of the selected approximate nearest neighbors deteriorate as the dimension d increases. We provide bounds for this deterioration (both analytically and empirically), which occurs reasonably slowly as d increases. Although the actual estimates are fairly complicated, it is reasonable to say that in 20 dimensions the scheme performs extremely well, and the performance does not seriously deteriorate until d is approximately 60. At d = 100, the degradation of the statistics displayed by the algorithm is quite noticeable.

An outline of our algorithm is as follows (a code sketch of the first two steps appears after the outline):
1. Choose a random rotation, acting on R^d, and rotate the N given points.
2. Take the first coordinate and divide the dataset into two boxes, where the boxes are divided by finding the median in the first coordina...
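The following is a minimal sketch of the rotate-and-split stage (steps 1-2 of the outline), under assumptions made here for illustration only: a dense QR-based random rotation stands in for the fast random rotations mentioned above, the recursion below the first split and the box size are guesses since the outline is truncated, candidates are collected by brute force inside each box, and the subsequent local graph search is omitted.

```python
import numpy as np

def random_rotation(d, rng):
    """Dense random orthogonal matrix via QR of a Gaussian matrix
    (a stand-in for the fast random rotations referred to in the text)."""
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    return q * np.sign(np.diag(r))           # sign fix for a uniform rotation

def split_boxes(points, indices, box_size, rng):
    """Rotate, then split at the median of the first coordinate, recursing
    until each box holds at most box_size points (steps 1-2 of the outline;
    the recursion pattern is an assumption)."""
    if len(indices) <= box_size:
        return [indices]
    rotated = points[indices] @ random_rotation(points.shape[1], rng).T
    order = np.argsort(rotated[:, 0])         # order by first coordinate
    half = len(indices) // 2                  # split at the median
    left, right = indices[order[:half]], indices[order[half:]]
    return (split_boxes(points, left, box_size, rng)
            + split_boxes(points, right, box_size, rng))

def candidate_neighbors(points, k, box_size=None, seed=0):
    """One iteration: collect brute-force k-NN candidates within each box."""
    box_size = box_size or max(2 * k, 32)     # illustrative choice
    rng = np.random.default_rng(seed)
    suspects = {}
    for box in split_boxes(points, np.arange(len(points)), box_size, rng):
        block = points[box]
        d2 = ((block[:, None, :] - block[None, :, :]) ** 2).sum(-1)
        for row, idx in enumerate(box):
            nearest = box[np.argsort(d2[row])[1:k + 1]]   # skip the point itself
            suspects.setdefault(idx, set()).update(nearest)
    return suspects
```

Repeating candidate_neighbors with T different seeds and keeping, for each point, the k closest suspects found across iterations mirrors the "iterated several times" part of the description; the local graph search that follows in the actual algorithm is not sketched here.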