In this paper, we present results from the weak‐lensing shape measurement GRavitational lEnsing Accuracy Testing 2010 (GREAT10) Galaxy Challenge. This marks an order of magnitude step change in the level of scrutiny employed in weak‐lensing shape measurement analysis. We provide descriptions of each method tested and include 10 evaluation metrics over 24 simulation branches. GREAT10 was the first shape measurement challenge to include variable fields; both the shear field and the point spread function (PSF) vary across the images in a realistic manner. The variable fields enable a variety of metrics that are inaccessible to constant shear simulations, including a direct measure of the impact of shape measurement inaccuracies, and of PSF size and ellipticity, on the shear power spectrum. To assess the impact of shape measurement bias for cosmic shear, we present a general pseudo‐Cℓ formalism that propagates spatially varying systematics in cosmic shear through to power spectrum estimates. We also show how one‐point estimators of bias can be extracted from variable shear simulations. The GREAT10 Galaxy Challenge received 95 submissions and saw a factor of 3 improvement in the accuracy achieved by shape measurement methods. The best methods achieve sub‐per cent average biases. We find a strong dependence of accuracy on signal‐to‐noise ratio, and indications of a weak dependence on galaxy type and size. Some requirements for the most ambitious cosmic shear experiments are met above a signal‐to‐noise ratio of 20. These results have the caveat that the simulated PSF was a ground‐based PSF. Our results are a snapshot of the accuracy of current shape measurement methods and a benchmark against which future improvements can be measured. This provides a foundation for a better understanding of the strengths and limitations of shape measurement methods.
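The one‐point bias estimators referred to above are commonly built on the linear parametrization γ_obs = (1 + m) γ_true + c, with m a multiplicative and c an additive bias. As a rough illustration of how such a bias propagates to the shear power spectrum (a minimal sketch, not the GREAT10 metric definitions; the fiducial spectrum and bias values below are made up):

```python
import numpy as np

def biased_power_spectrum(c_ell_true, m=0.0, c_ell_additive=0.0):
    """Propagate the linear bias model gamma_obs = (1 + m) * gamma_true + c
    to the shear power spectrum: C_obs ~ (1 + m)^2 * C_true + C_c,
    assuming the additive bias is uncorrelated with the true shear."""
    return (1.0 + m) ** 2 * np.asarray(c_ell_true) + c_ell_additive

# Toy example: a 1 per cent multiplicative bias inflates the recovered
# power spectrum amplitude by roughly 2 per cent.
ell = np.arange(10, 2000)
c_ell_true = 1e-9 * (ell / 100.0) ** -1.0   # made-up fiducial spectrum
c_ell_obs = biased_power_spectrum(c_ell_true, m=0.01)
print("fractional change:", np.median(c_ell_obs / c_ell_true) - 1.0)
```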
The GRavitational lEnsing Accuracy Testing 2008 (GREAT08) Challenge focuses on a problem that is of crucial importance for future observations in cosmology. The shapes of distant galaxies can be used to determine the properties of dark energy and the nature of gravity, because light from those galaxies is bent by gravity from the intervening dark matter. The observed galaxy images appear distorted, although only slightly, and their shapes must be precisely disentangled from the effects of pixelisation, convolution and noise. The worldwide gravitational lensing community has made significant progress in techniques to measure these distortions via the Shear TEsting Program (STEP). Via STEP, we have run challenges within our own community, and come to recognise that this particular image analysis problem is ideally matched to experts in statistical inference, inverse problems and computational learning. Thus, in order to continue the progress seen in recent years, we are seeking an infusion of new ideas from these communities. This document details the GREAT08 Challenge for potential participants; see http://www.great08challenge.info for the simulations and related materials.
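The measurement problem described above, a small shear distortion followed by PSF convolution, pixelisation and noise, can be illustrated with a toy forward simulation. The sketch below assumes Gaussian galaxy and PSF profiles and nearest‐neighbour resampling purely for illustration; it is not the GREAT08 simulation pipeline.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_image(n, sigma, e1=0.0):
    """Elliptical Gaussian on an n x n grid (toy galaxy or PSF);
    e1 stretches x relative to y as a crude ellipticity."""
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    sx, sy = sigma * (1.0 + e1), sigma * (1.0 - e1)
    return np.exp(-0.5 * ((x / sx) ** 2 + (y / sy) ** 2))

def apply_shear(img, g1):
    """Apply a small shear g1 (g2 = 0) by coordinate remapping with
    nearest-neighbour resampling -- fine for illustration only."""
    n = img.shape[0]
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    xs = (1.0 - g1) * x            # approximate inverse shear transform
    ys = (1.0 + g1) * y
    xi = np.clip(np.round(xs + (n - 1) / 2.0).astype(int), 0, n - 1)
    yi = np.clip(np.round(ys + (n - 1) / 2.0).astype(int), 0, n - 1)
    return img[yi, xi]

rng = np.random.default_rng(0)
galaxy = gaussian_image(64, sigma=3.0)              # intrinsic galaxy light
sheared = apply_shear(galaxy, g1=0.05)              # weak gravitational shear
psf = gaussian_image(64, sigma=2.0, e1=0.02)        # anisotropic PSF
observed = fftconvolve(sheared, psf, mode="same")   # convolution on the pixel grid
observed += rng.normal(scale=0.05 * observed.max(), size=observed.shape)  # noise
```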
A new model-independent method is presented for the analysis of pulsar timing data and the estimation of the spectral properties of an isotropic gravitational wave background (GWB). Taking a Bayesian approach, we show that by rephrasing the likelihood we are able to eliminate the most costly aspects of computation normally associated with this type of data analysis. When applied to the International Pulsar Timing Array Mock Data Challenge data sets, this results in speedups of approximately 2–3 orders of magnitude compared to established methods, in the most extreme cases reducing the run time from several hours on the high-performance computer 'DARWIN' to less than a minute on a normal workstation. Because of the versatility of this approach, we present three applications of the new likelihood. In the low signal-to-noise regime we sample directly from the power spectrum coefficients of the GWB signal realization. In the high signal-to-noise regime, where the data can support a large number of coefficients, we sample from the joint probability density of the power spectrum coefficients for the individual pulsars and the GWB signal realization, using a 'guided Hamiltonian sampler' to sample efficiently from this high-dimensional (∼1000-parameter) space. Critically, in both these cases we need make no assumptions about the form of the power spectrum of the GWB or of the individual pulsars. Finally, we show that, if desired, a power-law model can still be fitted during sampling. We then apply this method to a more complex data set designed to better represent a future International Pulsar Timing Array or European Pulsar Timing Array data release. We show that even in challenging cases, where the data feature large jumps of the order of 5 years, observations spanning between 4 and 18 years for different pulsars, and steep red noise processes, we are able to parametrize the underlying GWB signal correctly. Finally, we present a method for characterizing the spatial correlation between pulsars on the sky, making no assumptions about the form of that correlation, and therefore providing the only truly general Bayesian method of confirming a GWB detection from pulsar timing data.
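For context on the power-law model mentioned above: in the pulsar timing literature a power-law GWB is usually specified by its characteristic strain h_c(f) = A (f / yr⁻¹)^α, which induces a timing-residual power spectral density S(f) = h_c(f)² / (12π² f³). The sketch below simply evaluates this standard relation; the amplitude and frequency grid are illustrative, and this is not the model-independent spectrum sampling used in the paper.

```python
import numpy as np

def residual_psd(f_yr, amplitude=1e-15, alpha=-2.0 / 3.0):
    """One-sided timing-residual PSD (in yr^3) induced by a power-law GWB
    with characteristic strain h_c(f) = amplitude * (f / 1 yr^-1)**alpha,
    via the standard relation S(f) = h_c(f)**2 / (12 * pi^2 * f**3).
    alpha = -2/3 corresponds to a background of supermassive black hole
    binaries; the amplitude here is purely illustrative."""
    h_c = amplitude * f_yr ** alpha
    return h_c ** 2 / (12.0 * np.pi ** 2 * f_yr ** 3)

# Frequencies resolvable by a ~10-year data set, in units of yr^-1.
freqs = np.arange(1, 31) / 10.0
print(residual_psd(freqs)[:3])
```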
Stellar radial velocity (RV) measurements have proven to be a very successful method for detecting extrasolar planets. Analysing RV data to determine the parameters of the extrasolar planets is a significant statistical challenge owing to the presence of multiple planets and various degeneracies between orbital parameters. Determining the number of planets favoured by the observed data is an even more difficult task. Bayesian model selection provides a mathematically rigorous solution to this problem by calculating marginal posterior probabilities of models with different numbers of planets, but the use of this method in extrasolar planetary searches has been hampered by the computational cost of evaluating the Bayesian evidence. None the less, Bayesian model selection has the potential to improve the interpretation of existing observational data and possibly detect yet undiscovered planets. We present a new and efficient Bayesian method for determining the number of extrasolar planets, as well as for inferring their orbital parameters, without having to calculate directly the Bayesian evidence for models containing a large number of planets. Instead, we work iteratively and at each iteration obtain a conservative lower limit on the odds ratio for the inclusion of an additional planet into the model. We apply this method to simulated data sets containing one and two planets and successfully recover the correct number of planets and reliable constraints on the orbital parameters. We also apply our method to RV measurements of HD 37124, 47 Ursae Majoris and HD 10180. For HD 37124, we confirm that the current data strongly favour a three‐planet system. We find strong evidence for the presence of a fourth planet in 47 Ursae Majoris, but its orbital period is suspiciously close to 1 yr, casting doubt on its validity. For HD 10180 we find strong evidence for a six‐planet system.
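As a crude illustration of planet-number model comparison (not the evidence-based odds-ratio method described above), the sketch below fits a circular-orbit RV model to simulated data and compares a zero-planet and a one-planet model using the Bayesian Information Criterion as a rough proxy for the marginal likelihood. The period is assumed known here, e.g. from a periodogram, and all numerical values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

P_DAYS = 50.0   # orbital period, assumed known (e.g. from a periodogram)

def rv_one_planet(t, v0, K, phi):
    """Circular-orbit radial-velocity signal; a full Keplerian model would
    also fit the period, eccentricity and argument of periastron."""
    return v0 + K * np.sin(2.0 * np.pi * t / P_DAYS + phi)

def bic(residuals, sigma, n_params):
    """Bayesian Information Criterion: a crude stand-in for the marginal
    likelihood (evidence) comparison described in the text."""
    chi2 = np.sum((residuals / sigma) ** 2)
    return chi2 + n_params * np.log(residuals.size)

# Simulate one planet with K = 12 m/s plus 3 m/s Gaussian noise.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 400.0, 60))
sigma = 3.0
v = rv_one_planet(t, 0.0, 12.0, 1.0) + rng.normal(0.0, sigma, t.size)

popt, _ = curve_fit(rv_one_planet, t, v, p0=[0.0, 10.0, 0.5])
delta_bic = bic(v - v.mean(), sigma, 1) - bic(v - rv_one_planet(t, *popt), sigma, 3)
print("Delta BIC (no planet vs one planet):", delta_bic)
```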
We constrain cosmological parameters by analysing the angular power spectra of the Baryon Oscillation Spectroscopic Survey DR12 galaxies, a spectroscopic follow-up of around 1.3 million SDSS galaxies over 9,376 deg² with an effective volume of ∼6.5 (Gpc h⁻¹)³ in the redshift range 0.15 ≤ z < 0.80. We split this sample into 13 tomographic bins (Δz = 0.05); angular power spectra were calculated using a pseudo-Cℓ estimator, and covariance matrices were estimated using log-normal simulated maps. Cosmological constraints obtained from these data were combined with constraints from the Planck CMB experiment as well as the JLA supernovae compilation. Considering a wCDM cosmological model measured on scales up to kmax = 0.07 h Mpc⁻¹, we constrain a constant dark energy equation of state with a ∼4% error at the 1σ level: w0 = −0.993 +0.046/−0.043, together with Ωm = 0.330 ± 0.012, Ωb = 0.0505 ± 0.002, S8 ≡ σ8 √(Ωm/0.3) = 0.863 ± 0.016, and h = 0.661 ± 0.012. For the same combination of data sets, but now considering a ΛCDM model with massive neutrinos and the same scale cut, we find Ωm = 0.328 ± 0.009, Ωb = 0.05017 +0.0009/−0.0008, S8 = 0.862 ± 0.017, and h = 0.663 +0.006/−0.007, with a 95% credible interval upper limit of Σmν < 0.14 eV for a normal hierarchy. These results are competitive with, if not better than, standard analyses of the same data set, and demonstrate that this should be a method of choice for future surveys, opening the door to their full exploitation in cross-correlation probes.
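A minimal sketch of the kind of pseudo-Cℓ measurement described above: bin galaxy positions from one tomographic slice into a HEALPix overdensity map, compute its power spectrum with healpy.anafast, and subtract Poisson shot noise. The mask deconvolution (mode coupling) and the log-normal covariance estimation used in the analysis are omitted, and the nside, lmax and function name are illustrative assumptions.

```python
import numpy as np
import healpy as hp

def pseudo_cl_overdensity(ra_deg, dec_deg, nside=256, lmax=512):
    """Pseudo-C_ell of a galaxy overdensity map for one redshift bin.
    Full-sky for simplicity: a real analysis must also handle the survey
    mask (mode coupling) and estimate covariances, e.g. from the
    log-normal mocks mentioned in the text."""
    npix = hp.nside2npix(nside)
    pix = hp.ang2pix(nside, ra_deg, dec_deg, lonlat=True)
    counts = np.bincount(pix, minlength=npix).astype(float)
    delta = counts / counts.mean() - 1.0            # overdensity map
    cl = hp.anafast(delta, lmax=lmax)
    shot_noise = 4.0 * np.pi / counts.sum()         # Poisson shot-noise level
    return cl - shot_noise

# Toy usage: uniformly distributed points, so the estimate should scatter
# around zero once shot noise is subtracted.
rng = np.random.default_rng(2)
ra = rng.uniform(0.0, 360.0, 200_000)
dec = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, 200_000)))
cl_est = pseudo_cl_overdensity(ra, dec)
```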