We present a new set of parton distribution functions (PDFs) based on a fully global dataset and machine learning techniques: NNPDF4.0. We expand the NNPDF3.1 determination with 44 new datasets, mostly from the LHC. We derive a novel methodology through hyperparameter optimization, leading to an efficient fitting algorithm built upon stochastic gradient descent. We use NNLO QCD calculations and account for NLO electroweak corrections and nuclear uncertainties. Theoretical improvements in the PDF description include a systematic implementation of positivity constraints and integrability of sum rules. We validate our methodology by means of closure tests and "future tests" (i.e. tests of backward and forward data compatibility), and assess its stability, specifically upon changes of PDF parametrization basis. We study the internal compatibility of our dataset, and investigate the dependence of results both on the choice of input dataset and on the fitting methodology. We perform a first study of the phenomenological implications of NNPDF4.0 on representative LHC processes. The software framework used to produce NNPDF4.0 is made available as an open-source package together with documentation and examples.
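The combination of stochastic gradient descent with positivity constraints mentioned above can be illustrated schematically. The following toy sketch (not the NNPDF4.0 implementation; the parametrization, penalty weight, and finite-difference gradient are all assumptions made for illustration) fits a simple functional form to pseudodata while adding a penalty term that disfavours negative values of the fitted function:

```python
import numpy as np

# Toy illustration: fit f(x) = A * x**a * (1-x)**b to pseudodata by
# gradient descent, adding a penalty on negative values of f(x) at
# sample points, in the spirit of positivity-constrained PDF fits.
# All choices here (model, weight, learning rate) are hypothetical.

rng = np.random.default_rng(0)
xs = np.linspace(0.01, 0.9, 30)
data = 2.0 * xs**0.5 * (1 - xs) ** 3 + rng.normal(0, 0.01, xs.size)

def model(p, x):
    A, a, b = p
    return A * x**a * (1 - x) ** b

def loss(p):
    chi2 = np.mean((model(p, xs) - data) ** 2)          # data term
    pos_pen = np.mean(np.clip(-model(p, xs), 0, None))  # positivity penalty
    return chi2 + 10.0 * pos_pen

def grad(p, eps=1e-6):
    # finite-difference gradient, adequate for this toy example
    g = np.zeros_like(p)
    for i in range(len(p)):
        dp = np.zeros_like(p)
        dp[i] = eps
        g[i] = (loss(p + dp) - loss(p - dp)) / (2 * eps)
    return g

p = np.array([1.0, 0.3, 2.0])
for _ in range(2000):
    p -= 0.05 * grad(p)
print(p, loss(p))
```

In an actual neural-network fit the gradient would be computed by backpropagation rather than finite differences, but the structure of the objective (data term plus positivity penalty) is the point of the sketch.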
We present the software framework underlying the NNPDF4.0 global determination of parton distribution functions (PDFs). The code is released under an open source licence and is accompanied by extensive documentation and examples. The code base is composed of a PDF fitting package, tools to handle experimental data and to efficiently compare them to theoretical predictions, and a versatile analysis framework. In addition to ensuring the reproducibility of the NNPDF4.0 (and subsequent) determination, the public release of the NNPDF fitting framework enables a number of phenomenological applications and the production of PDF fits under user-defined data and theory assumptions.
We study the correlation between different sets of parton distributions (PDFs). Specifically, viewing different PDF sets as distinct determinations, generally correlated, of the same underlying physical quantity, we examine the extent to which the correlation between them is due to the underlying data. We do this both for pairs of PDF sets determined using a given fixed methodology, and between sets determined using different methodologies. We show that correlations have a sizable component that is not due to the underlying data, because the data do not determine the PDFs uniquely. We show that the data-driven correlations can be used to assess the efficiency of methodologies used for PDF determination. We also show that the use of data-driven correlations for the combination of different PDFs into a joint set can lead to inconsistent results, and thus that the statistical combination used in constructing the widely used PDF4LHC15 PDF set remains the most reliable method.
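The decomposition described above, in which the correlation between two PDF sets has a data-driven component and a methodology-dependent component, can be mimicked with a minimal Monte Carlo toy. In this sketch (a hypothetical setup, not taken from the paper's actual procedure) two "determinations" share the same data fluctuations replica by replica but add independent methodological noise, so their correlation falls below one even though the underlying data are identical:

```python
import numpy as np

# Toy model of two PDF determinations as paired Monte Carlo replica
# ensembles: a common data-fluctuation component plus independent
# methodology noise for each set. Sizes of the noise terms are assumptions.

rng = np.random.default_rng(1)
nrep = 1000
truth = 0.5
data_fluct = rng.normal(0, 0.10, nrep)  # shared, data-driven fluctuations
meth_a = rng.normal(0, 0.05, nrep)      # methodology noise, set A
meth_b = rng.normal(0, 0.05, nrep)      # methodology noise, set B

set_a = truth + data_fluct + meth_a
set_b = truth + data_fluct + meth_b

# correlation across paired replicas at a single (x, Q) point
corr = np.corrcoef(set_a, set_b)[0, 1]
print(corr)
```

With these assumed variances the expected correlation is var(data) / (var(data) + var(meth)) = 0.01 / 0.0125 = 0.8, which illustrates how independent methodology noise dilutes the data-driven correlation.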
Since the first determination of a structure function many decades ago, all methodologies used to determine structure functions or parton distribution functions (PDFs) have employed a common prefactor as part of the parametrization. The NNPDF collaboration pioneered the use of neural networks to overcome the inherent bias of constraining the space of solutions with a fixed functional form, while still keeping the same common prefactor as a preprocessing. Over the years, various increasingly sophisticated techniques have been introduced to counter the effect of the prefactor on the PDF determination. In this paper we present a methodology that performs a data-based scaling of the Bjorken x input parameter and thereby facilitates the removal of the prefactor, significantly simplifying the methodology without loss of efficiency, and finding good agreement with previous results.
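One simple way to realize a data-based scaling of the x input is an empirical-CDF map: each raw x value is sent to its rank in the distribution of x points covered by the data, so the network input is roughly uniform on [0, 1] across the many decades of x. The sketch below is an illustration under that assumption, not the specific scaling of the paper:

```python
import numpy as np

# Hypothetical "data-based scaling" of Bjorken x: map raw x to [0, 1]
# via the empirical CDF of the x values appearing in the (pseudo)dataset,
# using linear interpolation between data points. The log-uniform toy
# dataset and the interpolation choice are assumptions for illustration.

rng = np.random.default_rng(2)
data_x = np.sort(10 ** rng.uniform(-4, 0, 500))  # x points spanning 1e-4..1
ecdf_y = np.arange(1, data_x.size + 1) / data_x.size

def scale_x(x):
    """Map raw Bjorken x to [0, 1] via the interpolated ECDF."""
    return np.interp(x, data_x, ecdf_y)

vals = scale_x(np.array([1e-4, 1e-2, 0.5]))
print(vals)
```

Because the map is monotone, the ordering of x values is preserved, while small-x points, which would otherwise be crowded near zero, are spread out over the input range without any x**a * (1-x)**b prefactor.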