Abstract: We introduce the DNNLikelihood, a novel framework to easily encode, through deep neural networks (DNN), the full experimental information contained in complicated likelihood functions (LFs). We show how to efficiently parametrise the LF, treated as a multivariate function of parameters of interest and nuisance parameters with high dimensionality, as an interpolating function in the form of a DNN predictor. We do not use any Gaussian approximation or dimensionality reduction, such as marginalisation or profiling…
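As a rough illustration of this parameterisation idea (a minimal sketch, not the paper's actual implementation), a DNN regressor can be fit to tabulated pairs of parameter points and log-likelihood values; all data, shapes, and hyperparameters below are hypothetical placeholders.

```python
# Minimal sketch: fit a DNN regressor to (parameters, log-likelihood) pairs
# so the network can serve as a fast interpolating surrogate of the full LF.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
n_pars = 10                                              # parameters of interest + nuisances (illustrative)
theta = rng.uniform(-1.0, 1.0, size=(100_000, n_pars))   # sampled parameter points (placeholder)
logL = -0.5 * np.sum(theta**2, axis=1)                   # toy stand-in for the true log-likelihood surface

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_pars,)),
    tf.keras.layers.Dense(512, activation="selu"),
    tf.keras.layers.Dense(512, activation="selu"),
    tf.keras.layers.Dense(1),            # predicts log-likelihood, no final activation
])
model.compile(optimizer="adam", loss="mse")
model.fit(theta, logL, batch_size=1024, epochs=5, validation_split=0.1)

# The trained predictor can then be queried at arbitrary parameter points,
# e.g. inside a profiling or Bayesian sampling loop.
logL_pred = model.predict(theta[:5])
```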
“…The models are made by learning numbers based on experimental measurements (e.g. likelihoods, posteriors, confidence levels for exclusion) from training data given the parameters of the physical model and experimental nuisance parameters [31]. Such training data can come from experiments or the recasting tools discussed below.…”
Section: Comparison of Reinterpretation Methods
“…Such an approach has been proposed in Ref. [31], using a parameterisation that can encode complicated likelihoods with minimal loss of accuracy, in a lightweight, standard, and framework-independent format (e.g. ONNX) suitable for a wide range of reinterpretation applications.…”
Section: Full Likelihoods
“…For scikit-learn, for example, one may serialise the entire model object with the pickle package in Python. Another option, in particular for neural networks, would be to store the model in an ONNX file [31]. The publication of trained ML models for HEP phenomenology is discussed in detail in the contribution by S. Caron et al in Ref.…”
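For illustration, a brief sketch of the two serialisation routes mentioned above: pickling a full scikit-learn model object, and exporting a neural network to the framework-independent ONNX format (shown here with PyTorch's built-in exporter as one concrete possibility; file names and model architectures are placeholders).

```python
# Sketch of the two storage options discussed above (illustrative names only).
import pickle
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

X = np.random.rand(200, 5)
y = (X[:, 0] > 0.5).astype(int)

# Option 1: serialise the entire scikit-learn model object with pickle.
clf = GradientBoostingClassifier().fit(X, y)
with open("model.pkl", "wb") as f:
    pickle.dump(clf, f)

# Option 2: export a neural network to ONNX, here via PyTorch's exporter.
import torch

net = torch.nn.Sequential(torch.nn.Linear(5, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
dummy_input = torch.randn(1, 5)          # example input fixing the input shape
torch.onnx.export(net, dummy_input, "model.onnx")
```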
We report on the status of efforts to improve the reinterpretation of searches and measurements at the LHC in terms of models for new physics, in the context of the LHC Reinterpretation Forum. We detail current experimental offerings in direct searches for new particles, measurements, technical implementations and Open Data, and provide a set of recommendations for further improving the presentation of LHC results in order to better enable reinterpretation in the future. We also provide a brief description of existing software reinterpretation frameworks and recent global analyses of new physics that make use of the current data.
The Matrix Element Method (MEM) is a powerful method to extract information from measured events at collider experiments. Compared to multivariate techniques built on large sets of experimental data, the MEM does not rely on an examples-based learning phase but directly exploits our knowledge of the physics processes. This comes at a price, both in terms of complexity and computing time, since the required multi-dimensional integral of a rapidly varying function needs to be evaluated for every event and physics process considered. This can be mitigated by optimizing the integration, as is done in the MoMEMta package, but the computing time remains a concern and often makes the use of the MEM in a full-scale analysis impractical or impossible. We investigate in this paper the use of a Deep Neural Network (DNN), built by regression of the MEM integral, as an ansatz for analysis, especially in the search for new physics.
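As a rough illustration of the regression idea described above (not the MoMEMta-based workflow itself), one could fit a DNN to reproduce precomputed MEM weights, conveniently on a log scale since the weights span many orders of magnitude; all names, shapes, and data below are placeholders.

```python
# Illustrative sketch: regress log(MEM weight) from event kinematics so the
# expensive multi-dimensional integral only has to be evaluated once per
# training event, not for every event in the final analysis.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(1)
n_features = 16                                   # e.g. four-momenta of reconstructed objects (illustrative)
events = rng.normal(size=(50_000, n_features))    # placeholder for reconstructed event kinematics
log_weights = -np.sum(events**2, axis=1)          # placeholder for precomputed log MEM weights

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_features,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(events, log_weights, batch_size=512, epochs=5, validation_split=0.1)

# At analysis time the network replaces the per-event integration,
# returning an approximate log MEM weight almost instantly.
approx_log_w = model.predict(events[:10])
```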
“…Ref. [45, 67-69, 72, 73, 111-113] for decorrelation techniques, Ref. [24, 28-31, 33, 41-43, 66, 100, 114-119] for inference, Ref. [3, 4, 40, 60, 109, …] for tagging, and various other applications in Ref.…”
Deep learning tools can incorporate all of the available information into a search for new particles, thus making the best use of the available data. This paper reviews how to optimally integrate information with deep learning and explicitly describes the corresponding sources of uncertainty. Simple illustrative examples show how these concepts can be applied in practice.
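A compact example of one way such information can be combined is the likelihood-ratio trick: a classifier trained to separate two hypotheses using the full feature vector approximates the per-event likelihood ratio, which is the statistically optimal test statistic. This is a generic sketch on purely synthetic data, not the specific procedure of the paper.

```python
# Likelihood-ratio trick sketch (synthetic data): with balanced classes, a
# classifier s(x) trained on signal vs background approximates
# p_sig(x) / (p_sig(x) + p_bkg(x)), so s / (1 - s) approximates the
# per-event likelihood ratio built from all available features.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
x_bkg = rng.normal(0.0, 1.0, size=(20_000, 4))
x_sig = rng.normal(0.5, 1.0, size=(20_000, 4))
X = np.vstack([x_sig, x_bkg])
y = np.concatenate([np.ones(len(x_sig)), np.zeros(len(x_bkg))])

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=200).fit(X, y)

s = clf.predict_proba(X[:5])[:, 1]
likelihood_ratio = s / (1.0 - s)   # approximate p_sig(x) / p_bkg(x) per event
print(likelihood_ratio)
```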