In the Simulations and Constructions of the Reionization of Cosmic Hydrogen (SCORCH) project, we present new radiation-hydrodynamic simulations with updated high-redshift galaxy populations and varying radiation escape fractions. The simulations are designed to have fixed Thomson optical depth τ ≈ 0.06, consistent with recent Planck observations, and similar midpoints of reionization 7.5 ≲ z ≲ 8.0, but with different ionization histories. The galaxy luminosity functions and ionizing photon production rates in our model are in good agreement with recent HST observations. Adopting a power-law form for the radiation escape fraction, f_esc(z) = f_8 [(1 + z)/9]^(a_8), we simulate the cases a_8 = 0, 1, and 2, and find that a_8 ≳ 2 is required to end reionization in the range 5.5 ≲ z ≲ 6.5, consistent with Lyman-alpha forest observations. At fixed τ, as the power-law slope a_8 increases, the reionization process starts earlier but ends later, with a longer duration ∆z and a decreased redshift asymmetry A_z. We find a range of durations 3.9 ≲ ∆z ≲ 4.6 that is currently in tension with the upper limit ∆z < 2.8 inferred from a recent joint analysis of Planck and South Pole Telescope observations.
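For concreteness, the escape-fraction parameterization above is easy to evaluate directly. A minimal Python sketch follows; the normalization f_8 = 0.15 is an illustrative placeholder, not the calibrated value from the simulations:

```python
import numpy as np

def f_esc(z, f8=0.15, a8=2.0):
    """Power-law radiation escape fraction f_esc(z) = f8 * [(1 + z) / 9]**a8.

    f8 is the escape fraction at z = 8; the default 0.15 is an
    illustrative placeholder, not the value calibrated in the paper.
    """
    return f8 * ((1.0 + z) / 9.0) ** a8

# Compare the three simulated slopes at a few redshifts.
for a8 in (0.0, 1.0, 2.0):
    vals = f_esc(np.array([6.0, 8.0, 10.0]), a8=a8)
    print(f"a8 = {a8:.0f}: f_esc(z = 6, 8, 10) = {np.round(vals, 3)}")
```

A steeper slope a_8 pushes more ionizing photons to early times (high z) and fewer to late times, which is why reionization starts earlier but finishes later as a_8 grows.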
Within the next few years, the Square Kilometre Array (SKA) or one of its pathfinders will hopefully detect the 21-cm signal fluctuations from the Epoch of Reionization (EoR). The main goal will then be to accurately constrain the underlying astrophysical parameters. Currently, this is mainly done with Bayesian inference using Markov Chain Monte Carlo sampling. Recently, studies using neural networks trained to perform inverse modelling have shown interesting results. We build on these by improving the accuracy of neural-network predictions and by exploring other supervised learning methods: kernel and ridge regression. Based on a large training set of 21-cm power spectra, we compare the performance of these supervised learning methods. When using a noiseless signal as input, we improve on previous neural-network accuracy by one order of magnitude and, using local ridge kernel regression, we gain another factor of a few. We then reach an rms prediction error of a few percent of the 1σ confidence level set by SKA thermal noise (as estimated with Bayesian inference). This last performance level requires optimizing the hyper-parameters of the method; how to do so optimally in the case of an unknown signal remains an open question. For an input signal altered by SKA-like thermal noise, our neural network recovers the astrophysical parameter values with an error within half of the 1σ confidence level due to the SKA thermal noise. This accuracy improves to 10% of the 1σ level when using local ridge kernel regression (with optimized hyper-parameters). We are thus reaching a performance level at which supervised learning methods are a viable alternative for determining the best-fit parameter values.
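As a rough illustration of the regression step (a sketch under stated assumptions, not the authors' pipeline), kernel ridge regression can map binned 21-cm power spectra to astrophysical parameters; the array shapes, the synthetic data, and the hyper-parameter grid below are all assumptions for the example:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)

# Stand-in training set: n_samples power spectra (n_bins k/z bins each)
# mapped to n_params astrophysical parameters. Real inputs would come
# from a simulator; this synthetic data is purely illustrative.
n_samples, n_bins, n_params = 500, 50, 3
X = rng.normal(size=(n_samples, n_bins))
y = rng.normal(size=(n_samples, n_params))

# Kernel ridge regression with an RBF kernel; alpha (ridge penalty) and
# gamma (kernel width) are the hyper-parameters the abstract notes must
# be optimized -- here via a simple cross-validated grid search.
search = GridSearchCV(
    KernelRidge(kernel="rbf"),
    param_grid={"alpha": [1e-3, 1e-2, 1e-1], "gamma": [1e-3, 1e-2, 1e-1]},
    cv=5,
)
search.fit(X, y)
theta_pred = search.predict(X[:5])  # predicted parameters for 5 spectra
```

Cross-validation handles hyper-parameter selection when training data are abundant; the open question raised in the abstract concerns tuning against a single unknown observed signal, which this simple scheme does not address.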
With a statistical detection of the 21 cm signal fluctuations from the Epoch of Reionization (EoR) expected in the next few years, there is interest in developing robust and precise techniques to constrain the underlying astrophysical parameters. Bayesian inference with Markov Chain Monte Carlo and various types of supervised learning for backward modelling (from signal to parameters) are examples of such techniques. They usually require many instances of forward modelling (from parameters to signal) to sample the parameter space, either when performing the steps of the Markov chain or when building a training sample for supervised learning. As forward modelling can be costly (for example, when performed with numerical simulations), we should attempt an optimal sampling according to some principle. With this goal in mind, we present an approach based on defining a metric on the space of observables, induced by the manner in which the modelling maps the parameter space onto the space of observables. This metric bears a close connection to Jeffreys' prior from information theory. We use it to generate a homogeneous and isotropic sampling of the signal space with two different methods. We show that when the resulting optimized samplings, created with 21cmFAST, are used to train a neural network, we obtain a modest reduction of the parameter-reconstruction error of ∼10% (compared to a naïve sampling of the same size). Excluding the borders of the parameter-space region, the improvement is more substantial, on the order of 30-40%.
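One natural reading of such a metric is the pullback of the Euclidean metric on signal space through the forward model, whose volume element then plays the role of a sampling density (and, for Gaussian noise of unit variance, coincides with the Fisher information underlying Jeffreys' prior). The sketch below is an assumed finite-difference construction, not the paper's implementation; the toy forward model stands in for a simulator such as 21cmFAST:

```python
import numpy as np

def pullback_metric(forward, theta, eps=1e-4):
    """Metric on parameter space induced by the forward model.

    g_ij(theta) = sum_k (ds_k/dtheta_i)(ds_k/dtheta_j): the Euclidean
    metric on signal space pulled back through the mapping theta -> s.
    Derivatives use central finite differences; `forward` is any callable
    returning the modelled signal as a 1-D array.
    """
    theta = np.asarray(theta, dtype=float)
    J = []  # Jacobian rows ds/dtheta_i
    for i in range(theta.size):
        dp = np.zeros_like(theta)
        dp[i] = eps
        J.append((forward(theta + dp) - forward(theta - dp)) / (2 * eps))
    J = np.array(J)     # shape (n_params, n_signal)
    return J @ J.T      # g_ij, shape (n_params, n_params)

def sampling_density(forward, theta):
    """Volume element sqrt(det g), i.e. the local density with which a
    homogeneous sampling of signal space populates parameter space."""
    return np.sqrt(np.linalg.det(pullback_metric(forward, theta)))

# Toy forward model standing in for the simulator (illustrative only).
toy = lambda t: np.array([t[0] ** 2, t[0] * t[1], np.sin(t[1])])
print(sampling_density(toy, [1.0, 0.5]))
```

Sampling parameters in proportion to this density places more training points where the signal changes rapidly with the parameters, which is the intuition behind the reported error reduction away from the borders of the parameter space.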