In recent years, breakthroughs in methods and data have enabled gravitational time delays to emerge as a very powerful tool for measuring the Hubble constant H0. However, published state-of-the-art analyses require of order one year of expert investigator time and up to a million hours of computing time per system. Furthermore, as precision improves, it becomes crucial to identify and mitigate systematic uncertainties. With this time-delay lens modelling challenge we aim to assess, via the blind analysis of simulated datasets, the precision and accuracy of the modelling techniques that are currently fast enough to handle of order 50 lenses. The results of Rung 1 and Rung 2 show that methods using only the point-source positions tend to have lower precision ($10$-$20\%$) while remaining accurate. In Rung 2, the methods that exploit the full information of the imaging and kinematic datasets can recover H0 within the target accuracy ($|A| < 2\%$) and precision ($<6\%$ per system), even in the presence of a poorly known point spread function and complex source morphology. A post-unblinding analysis of Rung 3 showed that the numerical precision of the ray-traced cosmological simulations was insufficient to test lens modelling methodology at the percent level, making the results difficult to interpret. A new challenge with improved simulations is needed to make further progress in the investigation of systematic uncertainties. For completeness, we present the Rung 3 results in an appendix and use them to discuss approaches to mitigating similar subtle data-generation effects in future blind challenges.
We present a determination of the Hubble constant from the joint, free-form analysis of eight quadruply imaged strong-lensing systems. In the concordance cosmology, we find $H_0 = 71.8^{+3.9}_{-3.3}\, \mathrm{km}\, \mathrm{s}^{-1}\, \mathrm{Mpc}^{-1}$, a precision of $4.97\%$. This is in agreement with the latest measurements from Type Ia supernovae and Planck observations of the cosmic microwave background. Our precision is lower than that of these and other recent time-delay cosmography determinations because our modelling strategies reflect the systematic uncertainties of lensing degeneracies. We are furthermore able to find reasonable lensed-image reconstructions when constraining H0 to either the local or the early-Universe value. This leads us to conclude that current lensing constraints on H0 are not strong enough to resolve the “Hubble tension” problem of cosmology.
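The dominant lensing degeneracy at issue here is the mass-sheet transformation. As a standard sketch (the parameter $\lambda$ is a free rescaling that leaves the image positions unchanged):

```latex
% Mass-sheet transformation: rescale the convergence and add a uniform sheet
\kappa_\lambda(\vec{\theta}) = \lambda\,\kappa(\vec{\theta}) + (1 - \lambda)
% The predicted time delays scale linearly with lambda,
\Delta t_\lambda = \lambda\,\Delta t ,
% so fitting a fixed observed delay rescales the inferred Hubble constant:
H_0 \;\to\; \lambda\, H_0 .
```

Because the observed image configuration cannot fix $\lambda$ on its own, any H0 inference inherits this uncertainty unless additional information (e.g. stellar kinematics) breaks the degeneracy.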
Bubble chambers and droplet detectors used in dosimetry and dark matter particle search experiments use a superheated metastable liquid in which nuclear recoils trigger bubble nucleation. This process is described by the classical heat spike model of F. Seitz [Phys. Fluids 1, 2 (1958); doi:10.1063/1.1724333], which uses classical nucleation theory to estimate the amount and localization of deposited energy required for bubble formation. Here we report on direct molecular dynamics simulations of heat-spike-induced bubble formation, which allow us to test the nanoscale process described by the classical heat spike model. Forty simulations were performed, each containing about 20 million atoms interacting through a truncated force-shifted Lennard-Jones potential. We find that the energy per unit length needed for bubble nucleation agrees well with theoretical predictions, but the allowed spike length and the required total energy are about twice as large as predicted. This can be explained by the rapid energy diffusion measured in the simulations: contrary to the assumption of the classical model, heat diffuses on a time scale significantly shorter than that of bubble formation. Finally, we examine α-particle tracks, which are much longer than those of neutrons and of potential dark matter particles. Empirically, α events were recently found to produce louder acoustic signals than neutron events, a distinction that is crucial for background rejection in dark matter searches. We show that a large number of individual bubbles can form along an α track, which explains the observed larger acoustic amplitudes.
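A truncated force-shifted Lennard-Jones potential, as named above, shifts both the potential and its derivative so that they vanish smoothly at the cutoff. A minimal sketch in reduced units, assuming an illustrative cutoff of r_c = 2.5σ (the cutoff used in the paper is not stated in the abstract):

```python
import numpy as np

def lj(r, eps=1.0, sigma=1.0):
    """Plain Lennard-Jones potential V(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6**2 - sr6)

def lj_force(r, eps=1.0, sigma=1.0):
    """Radial force F(r) = -dV/dr for the plain Lennard-Jones potential."""
    sr6 = (sigma / r) ** 6
    return 24.0 * eps * (2.0 * sr6**2 - sr6) / r

def lj_force_shifted(r, rc=2.5, eps=1.0, sigma=1.0):
    """Truncated force-shifted LJ: V_fs(r) = V(r) - V(rc) - (r - rc) V'(rc)
    for r < rc, zero beyond.  Both the potential and the force go
    continuously to zero at the cutoff, avoiding energy drift in MD."""
    r = np.asarray(r, dtype=float)
    v = lj(r, eps, sigma) - lj(rc, eps, sigma) + (r - rc) * lj_force(rc, eps, sigma)
    return np.where(r < rc, v, 0.0)
```

Shifting the force (not just the potential) at the cutoff is what keeps the dynamics well behaved, which matters for heat-diffusion measurements like those described above.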
In the coming years, strong gravitational lens discoveries are expected to increase in frequency by two orders of magnitude. Lens-modelling techniques are being developed to prepare for this massive influx of new lens data, and blind tests of lens reconstruction with simulated data are needed for validation. In this paper we present a systematic blind study of a sample of 15 simulated strong gravitational lenses from the EAGLE suite of hydrodynamic simulations. We model these lenses with a free-form technique and evaluate the reconstructed mass distributions using criteria based on shape, orientation, and lensed-image reconstruction. Especially useful is a lensing analogue of the Roche potential in binary star systems, which we call the lensing Roche potential; we introduce it in order to factor out the well-known problem of the steepness or mass-sheet degeneracy. Einstein radii are on average well recovered, with a relative error of ∼5% for quads and ∼25% for doubles; the position angle of ellipticity is also reproduced well on average, to within ±10°, but the reconstructed mass maps tend to be too round and too shallow. Reproducing the lensed images is easy, but optimising on this criterion does not guarantee a better reconstruction of the mass distribution.
The study of strong-lensing systems conventionally involves constructing a mass distribution that can reproduce the observed multiple-imaging properties. Such mass reconstructions are generically non-unique. Here we present an alternative strategy: instead of modelling the mass distribution, we search cosmological galaxy-formation simulations for plausible matches. In this paper we test the idea on seven well-studied lenses from the SLACS survey. For each of these, we first pre-select a few hundred galaxies from the EAGLE simulations, using the expected Einstein radius as an initial criterion. Then, for each of these pre-selected galaxies, we fit for the source light distribution, using MCMC for the placement and orientation of the lensing galaxy, so as to reproduce the multiple images and arcs. The results indicate that the strategy is feasible and can easily reject unphysical galaxy-formation scenarios. It even yields relative posterior probabilities of two different galaxy-formation scenarios, though these are not yet statistically significant. Extension to other observables, such as the kinematics and colours of the stellar population in the lensing galaxy, is straightforward in principle, though we have not attempted it here. Scaling to arbitrarily large numbers of lenses also appears feasible. This will be especially relevant for upcoming wide-field surveys, through which the number of known galaxy lenses may rise a hundredfold, overwhelming conventional modelling methods.
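The Einstein-radius pre-selection criterion can be illustrated with the point-mass formula $\theta_E = \sqrt{(4GM/c^2)\,D_{ls}/(D_l D_s)}$. A minimal sketch; the mass and angular-diameter distances in the usage example are illustrative round numbers, not SLACS measurements:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light, m s^-1
MPC = 3.086e22  # metres per megaparsec
MSUN = 1.989e30 # solar mass, kg

def einstein_radius_arcsec(m_solar, d_l_mpc, d_s_mpc, d_ls_mpc):
    """Point-mass Einstein radius theta_E = sqrt(4GM/c^2 * D_ls / (D_l * D_s)),
    with the mass in solar masses and angular-diameter distances in Mpc,
    returned in arcseconds."""
    m = m_solar * MSUN
    d_l, d_s, d_ls = (d * MPC for d in (d_l_mpc, d_s_mpc, d_ls_mpc))
    theta_rad = math.sqrt(4.0 * G * m / C**2 * d_ls / (d_l * d_s))
    return math.degrees(theta_rad) * 3600.0

# Illustrative inputs: ~3e11 Msun within the Einstein radius,
# lens at ~700 Mpc, source at ~1500 Mpc, D_ls ~ 900 Mpc
theta_e = einstein_radius_arcsec(3e11, 700.0, 1500.0, 900.0)  # roughly 1.4 arcsec
```

A cut of this kind (keeping simulated galaxies whose predicted θ_E falls near the observed value) is a cheap first filter before the more expensive source-fitting step.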