We demonstrate the ability of Convolutional Neural Networks (CNNs) to mitigate systematics in the virial scaling relation and produce dynamical mass estimates of galaxy clusters with remarkably low bias and scatter. We present two models, CNN1D and CNN2D, which leverage this deep learning tool to infer cluster masses from distributions of member galaxy dynamics. Our first model, CNN1D, infers cluster mass directly from the distribution of member galaxy line-of-sight velocities. Our second model, CNN2D, extends the input space of CNN1D to learn on the joint distribution of galaxy line-of-sight velocities and projected radial distances. We train each model as a regression over cluster mass using a labeled catalog of realistic mock cluster observations generated from the MultiDark simulation and the UniverseMachine catalog. We then evaluate the performance of each model on an independent set of mock observations selected from the same simulated catalog. The CNN models produce cluster mass predictions with log-normal residual scatter as low as 0.127 dex, a factor-of-three improvement over the classical M–σ power-law estimator. Furthermore, the CNN models reduce prediction scatter relative to similar machine learning approaches by up to 20% while executing in drastically shorter training and evaluation times (by a factor of 30) and producing considerably more robust mass predictions (improving prediction stability under variations in galaxy sampling rate by 53%).
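The CNN1D pipeline described above (a velocity distribution in, a mass estimate out) can be sketched in miniature. Everything below is illustrative: the histogram binning, the hand-set convolution kernel, and the linear readout weights are hypothetical stand-ins for a trained network, not the paper's model.

```python
# Minimal sketch of the CNN1D idea: bin a cluster's member-galaxy
# line-of-sight velocities into a fixed-length histogram, scan it with a
# 1D convolutional filter, pool, and read out log10(mass) linearly.
# All numeric values (kernel, head weights, velocities) are illustrative.

def velocity_histogram(velocities, n_bins=8, v_max=2000.0):
    """Bin line-of-sight velocities (km/s) into a normalized histogram."""
    hist = [0.0] * n_bins
    width = 2.0 * v_max / n_bins
    for v in velocities:
        i = int((v + v_max) // width)
        if 0 <= i < n_bins:
            hist[i] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def conv1d(signal, kernel):
    """Valid-mode 1D convolution (cross-correlation, deep-learning style)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def cnn1d_predict(velocities):
    hist = velocity_histogram(velocities)
    feat = relu(conv1d(hist, kernel=[-1.0, 2.0, -1.0]))  # edge-like filter
    pooled = sum(feat) / len(feat)                        # global average pool
    return 14.0 + 2.0 * pooled  # linear head -> log10(M), illustrative weights

los_velocities = [-850.0, -300.0, -120.0, 40.0, 210.0, 480.0, 990.0]
print(round(cnn1d_predict(los_velocities), 3))  # → 14.143
```

A trained version would learn the kernel and head weights by minimizing a regression loss against the mock catalog's true masses; the point here is only the shape of the computation.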
Reducing the arithmetic precision of a computation has real performance implications, including increased speed, decreased power consumption, and a smaller memory footprint. For some architectures, e.g., GPUs, there can be such a large performance difference that using reduced precision is effectively a requirement. The tradeoff is that the accuracy of the computation will be compromised. In this paper we describe a proof assistant and associated static analysis techniques for efficiently bounding numerical and precision-related errors. The programmer/compiler can use these bounds to numerically verify and optimize an application for different input and machine configurations. We present several case study applications that demonstrate the effectiveness of these techniques and the performance benefits that can be achieved with rigorous precision analysis.
The cold dark matter model predicts that large-scale structure grows hierarchically: small dark matter halos form first and then grow gradually via continuous mergers and accretion. These halos host the majority of the baryonic matter in the Universe, in the form of hot gas and a cold stellar phase. Determining how baryons are partitioned into these phases requires detailed modeling of galaxy formation and halo assembly history. It has been speculated that the formation time of halos of the same mass might be correlated with their baryonic content. To evaluate this hypothesis, we employ halos of mass above 10^14 M_⊙ realized in the TNG300 solution of the IllustrisTNG project. Formation time is not directly observable; hence, we rely on the magnitude gap between the brightest and the fourth-brightest member galaxy of a halo, which has been shown to trace the formation time of the host halo. We compute the conditional statistics of the stellar and gas content of halos, conditioned on their total mass and magnitude gap. We find a strong correlation between the magnitude gap and the gas mass, BCG stellar mass, and satellite galaxies' stellar mass, but not the total stellar mass of the halo. Conditioning on the magnitude gap can reduce the scatter about the halo property–halo mass relation and has a significant impact on the conditional covariance. The reduction in scatter can be as large as 30%, which implies more accurate halo mass predictions. Incorporating the magnitude gap thus has the potential to improve cosmological constraints from halo abundance and allows us to gain insight into baryon evolution within these systems.
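The magnitude-gap statistic used above as a formation-time proxy is simple to compute: the difference between the fourth-brightest and the brightest member magnitudes (often written M14). A minimal sketch, with hypothetical r-band magnitudes:

```python
# Magnitude gap M14 = m4 - m1: the gap between the brightest and the
# fourth-brightest member galaxy of a halo. A larger gap indicates an
# earlier-forming ("fossil-like") halo. Magnitudes below are hypothetical.

def magnitude_gap(member_mags, rank=4):
    """Gap between the brightest and the `rank`-th brightest member."""
    mags = sorted(member_mags)  # brighter = more negative magnitude
    if len(mags) < rank:
        raise ValueError("need at least %d member galaxies" % rank)
    return mags[rank - 1] - mags[0]

mags = [-24.1, -22.0, -21.7, -21.5, -20.9]  # illustrative r-band magnitudes
print(round(magnitude_gap(mags), 2))  # → 2.6, a large gap
```

Conditioning the halo property–mass relation on this one extra observable is what drives the scatter reduction reported above.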
A growing body of work has established the modelling of stochastic processes as a promising area of application for quantum technologies; it has been shown that quantum models are able to replicate the future statistics of a stochastic process whilst retaining less information about the past than any classical model must, even for a purely classical process. Such memory-efficient models open a potential future route to study complex systems in greater detail than ever before, and suggest profound consequences for our notions of structure in their dynamics. Yet, to date, methods for constructing these quantum models are based on having prior knowledge of the optimal classical model. Here, we introduce a protocol for blind inference of the memory structure of quantum models, tailored to take advantage of quantum features, direct from time-series data, in the process highlighting the robustness of their structure to noise. This in turn provides a way to construct memory-efficient quantum models of stochastic processes whilst circumventing certain drawbacks that manifest solely as a result of classical information processing in classical inference protocols.
Background: There is evidence that food industry actors try to shape science on nutrition and physical activity. But they are also involved in influencing the principles of scientific integrity. Our research objective was to study the extent of that involvement, with a case study of ILSI (the International Life Sciences Institute) as a key actor in that space. We conducted a qualitative document analysis, triangulating data from an existing scoping review, publicly available information, internal industry documents, and existing freedom of information requests.

Results: Food companies have joined forces through ILSI to shape the development of scientific integrity principles. These activities started in 2007, in direct response to the growing criticism of the food industry's funding of research. ILSI first built a niche literature on conflicts of interest (COI) in food science and nutrition at the individual and study levels. Because the literature on that topic was scarce, these publications were used and cited in ILSI's and others' further work on COI, scientific integrity, and public-private partnerships (PPP), beyond the fields of nutrition and food science. In the past few years, ILSI has started to shape the very principles of scientific integrity and to propose that government agencies, professional associations, not-for-profits, and others adopt these principles. In the process, ILSI built a reputation in the scientific integrity space. ILSI's work on scientific integrity ignores the risks of accepting corporate funding and fails to provide guidelines to protect against these risks.

Conclusions: The activities developed by ILSI on scientific integrity principles are part of a broader set of political practices of industry actors to influence public health policy, research, and practice. It is important to learn about and counter these practices, as they risk shaping scientific standards to suit the industry's interests rather than public health ones.