Peculiar velocity measurements are the only tool available in the low-redshift Universe for mapping the large-scale distribution of matter and can thus be used to constrain cosmology. Using redshifts from the 2M++ redshift compilation, we reconstruct the density of galaxies within 200 h⁻¹ Mpc; the predicted motion of the Local Group is only 10° out of alignment with the Cosmic Microwave Background dipole. To account for velocity contributions arising from sources outside the 2M++ volume, we fit simultaneously for β* and an external bulk flow in our analysis. We find that an external bulk flow is preferred at the 5.1σ level, and the best fit has a velocity of 159 ± 23 km s⁻¹ towards l = 304°. Finally, the predicted bulk flow of a 50 h⁻¹ Mpc Gaussian-weighted volume centred on the Local Group is 230 ± 30 km s⁻¹ in the direction l = 293°, in agreement with predictions from ΛCDM.
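The simultaneous fit for β* and an external bulk flow described above is, at heart, a linear least-squares problem: each observed line-of-sight velocity is modelled as β* times the reconstruction-predicted velocity plus the projection of a constant bulk-flow vector onto the line of sight. A minimal sketch on synthetic data (not the 2M++ pipeline; all names and values are illustrative):

```python
import math

def solve(M, b):
    """Solve M x = b by Gaussian elimination with partial pivoting."""
    n = len(M)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def fit_beta_bulk_flow(rhat, v_pred, v_obs):
    """Least-squares fit of the model v_obs_i = beta * v_pred_i + V_ext . rhat_i.

    Design-matrix row per galaxy: [v_pred, rx, ry, rz]; solve the normal
    equations (A^T A) x = A^T y for x = (beta, Vx, Vy, Vz).
    """
    rows = [[v_pred[i], *rhat[i]] for i in range(len(v_obs))]
    M = [[sum(r[i] * r[j] for r in rows) for j in range(4)] for i in range(4)]
    b = [sum(r[i] * v for r, v in zip(rows, v_obs)) for i in range(4)]
    return solve(M, b)

# synthetic galaxies: unit line-of-sight directions and predicted velocities
s3, s2 = 1 / math.sqrt(3), 1 / math.sqrt(2)
rhat = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (s3, s3, s3), (s2, s2, 0)]
v_pred = [100.0, 200.0, -150.0, 50.0, 80.0]
beta_true, V_true = 0.43, (150.0, -50.0, 30.0)
v_obs = [beta_true * vp + sum(v * r for v, r in zip(V_true, rh))
         for vp, rh in zip(v_pred, rhat)]

beta_fit, Vx, Vy, Vz = fit_beta_bulk_flow(rhat, v_pred, v_obs)
```

With noise-free synthetic data the fit recovers β and the bulk-flow components exactly; the real analysis additionally weights each galaxy by its distance uncertainty.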
Notes: (a) The exact number will depend on the final LSST observing strategy and implementation.
Machine learning has been widely applied to clearly defined problems of astronomy and astrophysics. However, deep learning and its conceptual differences from classical machine learning have been largely overlooked in these fields. The broad hypothesis behind our work is that letting the abundant real astrophysical data speak for itself, with minimal supervision and no labels, can reveal interesting patterns that may facilitate discovery of novel physical relationships. Here, as the first step, we seek to interpret the representations a deep convolutional neural network chooses to learn, and find correlations in them with current physical understanding. We train an encoder–decoder architecture on the self-supervised auxiliary task of reconstruction to allow it to learn general representations without bias towards any specific task. By exerting weak disentanglement at the information bottleneck of the network, we implicitly enforce interpretability in the learned features. We develop two independent statistical and information-theoretical methods for finding the number of learned informative features, as well as measuring their true correlation with astrophysical validation labels. As a case study, we apply this method to a data set of ∼270 000 stellar spectra, each comprising ∼300 000 dimensions. We find that the network clearly assigns specific nodes to estimate (notions of) parameters such as radial velocity and effective temperature without being asked to do so, all in a completely physics-agnostic process. This supports the first part of our hypothesis. Moreover, we find with high confidence that there are ∼4 more independently informative dimensions that do not show a direct correlation with our validation parameters, presenting potential room for future studies.
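One of the validation steps described above, checking whether an individual bottleneck node tracks a known physical parameter, reduces to measuring the correlation between a learned feature and a catalogue label. A minimal sketch with toy data (hypothetical values; the paper also uses more elaborate information-theoretic measures):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between a learned feature and a validation label."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# toy data: activations of one bottleneck node vs. catalogue radial velocity
feature = [0.1, 0.5, 0.9, 1.3]
label = [10.0, 50.0, 90.0, 130.0]
r = pearson_r(feature, label)
```

A node whose activation correlates strongly with, say, radial velocity across the validation set is the kind of evidence the abstract refers to when it says the network "assigns specific nodes" to physical parameters.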
In preparation for photometric classification of transients from the Legacy Survey of Space and Time (LSST) we run tests with different training data sets. Using estimates of the depth to which the 4-metre Multi-Object Spectroscopic Telescope (4MOST) Time Domain Extragalactic Survey (TiDES) can classify transients, we simulate a magnitude-limited sample reaching rAB ≈ 22.5 mag. We run our simulations with the software snmachine, a photometric classification pipeline using machine learning. The machine-learning algorithms struggle to classify supernovae when the training sample is magnitude-limited, in contrast to representative training samples. Classification performance noticeably improves when we combine the magnitude-limited training sample with a simulated realistic sample of faint, high-redshift supernovae observed by larger spectroscopic facilities: the algorithms’ range of average area under the ROC curve (AUC) scores over 10 runs increases from 0.547–0.628 to 0.946–0.969, and the purity of the classified sample reaches 95 per cent in all runs for 2 of the 4 algorithms. By creating new, artificial light curves using the augmentation software avocado, we achieve a purity in our classified sample of 95 per cent in all 10 runs performed for all machine-learning algorithms considered. We also reach a highest average AUC score of 0.986 with the artificial neural network algorithm. Having ‘true’ faint supernovae to complement our magnitude-limited sample is a crucial requirement in the optimisation of a 4MOST spectroscopic sample. However, our results are a proof of concept that augmentation is also necessary to achieve the best classification results.
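The AUC scores quoted above can be computed directly from ranked classifier scores: the AUC equals the probability that a randomly chosen positive example outranks a randomly chosen negative one. A minimal sketch with toy scores (not the snmachine pipeline):

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the rank interpretation:
    the fraction of (positive, negative) pairs where the positive
    example scores higher, counting ties as half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# toy example: 2 true positives (label 1) and 2 contaminants (label 0)
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
auc = roc_auc(labels, scores)
```

A score of 0.5 corresponds to random ranking and 1.0 to a perfect separation, which is why the jump from 0.547–0.628 to 0.946–0.969 reported above marks such a substantial improvement.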