N-body simulations are a very important tool in the study of the formation of large-scale structure. Much of the progress in understanding the physics of the high-redshift universe, and in comparing theory with observations, would not have been possible without them. Given the importance of this tool, it is essential to understand its limitations; ignoring these can easily lead to interesting but unreliable results. In this paper we study the limitations arising from the finite size of the simulation volume. This finite size implies that modes larger than the simulation volume are ignored and a truncated power spectrum is simulated. If the simulation volume is large enough, the mass in collapsed haloes expected from the full power spectrum and from the truncated power spectrum should match. We propose a quantitative measure based on this approach that allows us to compute the minimum box size for an N-body simulation. We find that the box size required for simulations of the ΛCDM model at high redshifts is much larger than is typically used. We can also use this approach to quantify the effect of perturbations at large scales for power-law models, and we find that at a fixed scale of non-linearity, the required box size grows rapidly as the spectral index decreases. The box size computed using this approach is also an appropriate choice for the transition scale when tools like MAP, which add the contribution of the missing power, are used.
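The mass-matching criterion can be sketched with the standard Press-Schechter collapsed fraction, computing the variance σ²(R) once from the full power spectrum and once with the modes below k = 2π/L removed. A minimal sketch; the power-law index, normalisation `A`, and box sizes below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from math import erfc, sqrt, pi

def sigma2(R, n=-2.0, A=50.0, kmin=1e-6, kmax=1e3):
    """Variance of the density field smoothed with a top-hat of radius R,
    for a power-law spectrum P(k) = A k^n.  Setting kmin = 2*pi/L removes
    the large-scale modes absent from a box of size L.  (A is an arbitrary
    normalisation, chosen here so the non-linear scale sits near R = 1.)"""
    k = np.logspace(np.log10(kmin), np.log10(kmax), 4000)
    x = k * R
    W = 3.0 * (np.sin(x) - x * np.cos(x)) / x**3      # top-hat window
    return np.trapz(A * k**(n + 2) * W**2 / (2.0 * pi**2), k)

def collapsed_fraction(R, delta_c=1.686, **kw):
    """Press-Schechter fraction of mass in haloes above the mass scale M(R);
    delta_c = 1.686 is the usual spherical-collapse threshold."""
    return erfc(delta_c / sqrt(2.0 * sigma2(R, **kw)))

# The criterion: a box of size L is adequate once the collapsed fraction
# from the truncated spectrum matches that from the full spectrum.
full = collapsed_fraction(1.0)
for L in (64.0, 256.0, 1024.0):   # box sizes in units of R (hypothetical)
    print(L, collapsed_fraction(1.0, kmin=2.0 * pi / L) / full)
```

The ratio printed in the loop approaches unity as L grows; a tolerance on that ratio defines a minimum acceptable box size.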
We present a detailed analysis of the error budget of the TreePM method for cosmological N-body simulations. We show that the filter suggested in Bagla (2002) for splitting the inverse-square force into short- and long-range components is close to optimal. The error in the long-range component of the force contributes very little to the total force error. Errors introduced by the tree approximation for the short-range force differ from those for the inverse-square force, and these errors dominate the total force error. We calculate the distribution function of the force error for clustered and unclustered particle distributions, which indicates the errors expected in realistic situations for different choices of the TreePM parameters. We test the code by simulating a few power-law models and checking for scale invariance.
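For reference, when the potential is split with a Gaussian k-space filter exp(-k²r_s²), the short-range force has a closed form in terms of the complementary error function. A minimal sketch of such a split (the normalisation and the r_s values used are illustrative, and the exact filter should be checked against Bagla 2002):

```python
from math import erfc, exp, sqrt, pi

def short_range_force(r, r_s, Gm=1.0):
    """Short-range part of the inverse-square force when the potential is
    split with a Gaussian k-space filter exp(-k^2 r_s^2)."""
    return (Gm / r**2) * (erfc(r / (2.0 * r_s))
                          + (r / (r_s * sqrt(pi))) * exp(-r**2 / (4.0 * r_s**2)))

def long_range_force(r, r_s, Gm=1.0):
    """Long-range remainder: the total 1/r^2 force minus the short-range piece."""
    return Gm / r**2 - short_range_force(r, r_s, Gm)

# The split reproduces the full force at small r, while the short-range
# part dies off within a few r_s -- this locality is what lets the tree
# walk handle the short-range sum and the PM grid handle the rest.
```

The two pieces sum to the exact inverse-square force at every separation, so any error comes only from how each piece is approximated (tree on one side, mesh on the other).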
Removing the receiver ghost before migration improves both the low- and high-frequency response as well as the signal-to-noise ratio, and these benefits carry over to preprocessing steps such as multiple suppression and velocity analysis. In this paper, we modify a previously published bootstrap approach that self-determines its own parameters for receiver deghosting in an x-t window. As in the x-t bootstrap method, the recorded data are first used to create a mirror data set through a 1D ray-tracing-based normal-moveout correction. The recorded and mirror data are then transformed into the τ-p domain and used to jointly invert for the receiver-ghost-free data. We apply the new algorithm to two field data sets: one with a constant streamer depth of 27 m, and one with streamer depth varying from 10 to 50 m. Our deghosting method effectively removes the receiver ghost, and the resulting image has broader bandwidth and a higher signal-to-noise ratio.
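For intuition, a receiver ghost from a streamer at depth d arrives delayed by τ = 2d·cosθ/v with reversed polarity, so each recorded trace is the ghost-free trace filtered by G(f) = 1 − r·exp(−2πifτ). The sketch below removes the ghost from a single trace by stabilised spectral division; this is a much simpler stand-in for the joint inversion described above, and the water velocity, reflection coefficient, and stabilisation constant are illustrative assumptions:

```python
import numpy as np

def ghost_operator(freqs, depth, v=1500.0, r=1.0, p=0.0):
    """Receiver-ghost operator G(f) = 1 - r*exp(-2j*pi*f*tau), with
    tau = 2*depth*cos(theta)/v and cos(theta) set by the ray parameter p."""
    cos_theta = np.sqrt(max(0.0, 1.0 - (p * v) ** 2))
    tau = 2.0 * depth * cos_theta / v
    return 1.0 - r * np.exp(-2j * np.pi * freqs * tau)

def deghost(trace, dt, depth, v=1500.0, r=1.0, eps=0.1):
    """Remove the receiver ghost from one trace by regularised division
    U = D*conj(G)/(|G|^2 + eps), which stabilises the ghost notches."""
    n = len(trace)
    G = ghost_operator(np.fft.rfftfreq(n, dt), depth, v, r)
    D = np.fft.rfft(trace)
    U = D * np.conj(G) / (np.abs(G) ** 2 + eps)
    return np.fft.irfft(U, n)
```

At 27 m depth and v = 1500 m/s, the first non-zero ghost notch sits near 1/τ ≈ 28 Hz, well inside the seismic band, which is why deghosting broadens the usable bandwidth.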
We study the interplay of clumping at small scales with the collapse and relaxation of perturbations at much larger scales, presenting results for the case where the large-scale perturbation is modelled as a plane wave. In the absence of substructure, collapse leads to the formation of a pancake with multistream regions. Dynamical relaxation of the plane wave is faster in the presence of substructure: scattering of substructures, and the resulting enhancement of transverse motions of haloes in the multistream region, lead to a thinner pancake. In turn, collapse of the plane wave leads to the formation of more massive collapsed haloes than does the collapse of substructure in the absence of the plane wave, and this happens without any increase in the total mass in collapsed haloes. A comparison with the Burgers equation approach in the absence of any substructure suggests that the preferred value of the effective viscosity depends primarily on the number of streams in a region.
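The Burgers (adhesion) description referred to above can be illustrated with a minimal 1D solver: a single sine-wave velocity mode steepens into a shock, the one-dimensional analogue of pancake formation, with ν playing the role of the effective viscosity. The grid, time step, viscosity value, and first-order scheme below are illustrative choices, not the paper's setup:

```python
import numpy as np

def burgers_evolve(u, dx, dt, nu, steps):
    """Evolve u_t + u*u_x = nu*u_xx on a periodic grid using first-order
    upwind advection and explicit central-difference diffusion."""
    u = u.copy()
    for _ in range(steps):
        up = np.roll(u, -1)                     # u[i+1]
        um = np.roll(u, 1)                      # u[i-1]
        # upwind derivative: backward difference where u > 0, forward where u < 0
        dudx = np.where(u > 0.0, (u - um) / dx, (up - u) / dx)
        lap = (up - 2.0 * u + um) / dx**2
        u = u + dt * (-u * dudx + nu * lap)
    return u

N, L = 256, 2.0 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
u0 = np.sin(x)                                   # single large-scale mode
u = burgers_evolve(u0, L / N, 1e-3, 0.05, 2000)  # evolve to t = 2; shock at x = pi
```

Larger ν thickens the viscous shock layer, which is the 1D counterpart of the pancake-thickness dependence on effective viscosity discussed in the abstract.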