The Fortran LHAPDF library has been a long-term workhorse in particle physics, providing standardised access to parton density functions for experimental and phenomenological purposes alike, following on from the venerable PDFLIB package. During Run 1 of the LHC, however, several fundamental limitations in LHAPDF's design became deeply problematic, restricting the usability of the library for important physics-study procedures and providing dangerous avenues by which to silently obtain incorrect results. In this paper we present the LHAPDF 6 library, a ground-up re-engineering of the PDFLIB/LHAPDF paradigm for PDF access which removes all limits on the use of concurrent PDF sets, massively reduces static memory requirements, offers improved CPU performance, and fixes fundamental bugs in multi-set access to PDF metadata. The new design, restricted for now to interpolated PDFs, uses centralised numerical routines and a powerful cascading metadata system to decouple software releases from the provision of new PDF data and to allow completely general parton content. More than 200 PDF sets have been migrated from LHAPDF 5 to the new universal data format via a stringent quality-control procedure. LHAPDF 6 is supported by many Monte Carlo generators and other physics programs, in some cases via a full set of compatibility routines, and is recommended for the demanding PDF access needs of LHC Run 2 and beyond.
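As an illustration of the multi-set access the abstract refers to, the following minimal C++ sketch queries two PDF sets concurrently through the LHAPDF 6 interface; the set names CT14nlo and NNPDF30_nlo_as_0118 and the kinematic point are illustrative choices, not taken from the paper.

    #include "LHAPDF/LHAPDF.h"
    #include <iostream>

    int main() {
      // Load member 0 of two independent PDF sets; LHAPDF 6 places no limit
      // on how many sets are held in memory at the same time.
      LHAPDF::PDF* p1 = LHAPDF::mkPDF("CT14nlo", 0);
      LHAPDF::PDF* p2 = LHAPDF::mkPDF("NNPDF30_nlo_as_0118", 0);

      const double x = 1e-3, Q = 100.0;  // momentum fraction and scale in GeV
      // xfxQ(pid, x, Q) returns x*f(x,Q) for PDG parton ID pid (21 = gluon).
      std::cout << "xg (CT14nlo)   = " << p1->xfxQ(21, x, Q) << "\n"
                << "xg (NNPDF3.0)  = " << p2->xfxQ(21, x, Q) << "\n"
                << "alpha_s(Q)     = " << p1->alphasQ(Q) << std::endl;

      delete p1; delete p2;
      return 0;
    }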
This document outlines a set of simplified models for dark matter and its interactions with Standard Model particles. It is intended to summarize the main characteristics that these simplified models have when applied to dark matter searches at the LHC, and to provide a number of useful expressions for reference. The list of models includes both s-channel and t-channel scenarios. For the s-channel case, spin-0 and spin-1 mediation is discussed, as are realizations where the Higgs particle provides a portal between the dark and visible sectors. The guiding principles underpinning the proposed simplified models are spelled out, and some suggestions for implementation are presented.
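For orientation, a representative interaction Lagrangian for the s-channel spin-1 (vector-mediator) scenario mentioned above can be written as below; the mediator field Z', the Dirac dark matter field \chi and the couplings g_{\rm DM}, g_q are generic labels chosen here for illustration rather than notation taken from the document:

    \mathcal{L}_{\rm int} \supset -\, g_{\rm DM}\, Z'_\mu\, \bar\chi \gamma^\mu \chi \;-\; g_q \sum_q Z'_\mu\, \bar q \gamma^\mu q .

The spin-0 s-channel case is covered by analogous expressions with a scalar or pseudoscalar mediator, typically with Yukawa-weighted quark couplings.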
This report of the BOOST2012 workshop presents the results of four working groups that studied key aspects of jet substructure. We discuss the potential of first-principles QCD calculations to yield a precise description of the substructure of jets and study the accuracy of state-of-the-art Monte Carlo tools. Limitations of the experiments' ability to resolve substructure are evaluated, with a focus on the impact of additional (pile-up) proton-proton collisions on jet substructure performance in future LHC operating scenarios. A final section summarizes the lessons learnt from jet substructure analyses in searches for new physics in the production of boosted top quarks.
The search for di-Higgs production at the LHC, in order to set limits on the Higgs trilinear coupling and constraints on new physics, is one of the main motivations for the LHC high-luminosity phase. Recent experimental analyses suggest that such searches will only be successful if information from a range of channels is included. We therefore investigate di-Higgs production in association with two hadronic jets and give a detailed discussion of both the gluon-fusion (GF) and the weak-boson-fusion (WBF) contributions, with a particular emphasis on the phenomenology with modified Higgs trilinear and quartic gauge couplings. We perform a detailed investigation of the full hadronic final state and find that hhjj production should add sensitivity to a di-Higgs search combination at the HL-LHC with 3 ab^-1. Since the WBF and GF contributions are sensitive to different sources of physics beyond the Standard Model, we devise search strategies to disentangle and isolate these production modes. While gluon fusion remains non-negligible in WBF-type selections, sizeable new physics contributions to the latter can still be constrained. As an example of the latter point, we investigate the sensitivity that can be obtained for a measurement of the quartic Higgs-gauge boson couplings.
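For reference, the two Standard Model couplings this analysis targets appear schematically as follows (written in one common normalisation, not necessarily the convention used in the paper):

    \mathcal{L} \supset -\,\lambda_3\, v\, h^3 \;+\; \frac{h^2}{v^2}\left( m_W^2\, W^+_\mu W^{-\mu} + \tfrac{1}{2} m_Z^2\, Z_\mu Z^\mu \right), \qquad \lambda_3^{\rm SM} = \frac{m_h^2}{2 v^2},

where the first term is the Higgs trilinear self-coupling and the second the quartic Higgs-gauge boson (hhVV) coupling to which the WBF contribution is particularly sensitive.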
Based on the established task of identifying boosted, hadronically decaying top quarks, we compare a wide range of modern machine learning approaches. Unlike most established methods, they rely on low-level input, for instance calorimeter output. While their network architectures are vastly different, their performance is comparatively similar. In general, we find that these new approaches are extremely powerful and great fun.