We present a global fit, within a hybrid strong/weak coupling quenching model, to all data on the suppression of high-energy jets and high-energy hadrons in the most central heavy ion collisions at the LHC at two different collision energies. Even though the measured suppression factors for hadrons and jets differ significantly from one another and appear to asymptote to different values in the high-energy limit, we obtain a simultaneous description of all these data after constraining the value of a single model parameter. We use our model to investigate the origin of the difference between the observed suppression of jets and of hadrons and relate it, quantitatively, to the observed modification of the fragmentation function of jets that have been modified by passage through the medium produced in heavy ion collisions. In particular, the observed increase in the fraction of hard fragments in medium-modified jets, which indicates that jets with fewer, harder fragments lose the least energy, corresponds quantitatively to the observed difference between the suppression of hadrons and that of jets. We argue that a harder fragmentation pattern for quenched jets of a given energy is a generic feature of any mechanism for the interaction between jets and the medium they traverse that yields a larger suppression for wider jets. We also compare the results of our global fit to LHC data with measurements of the suppression of high-energy hadrons in RHIC collisions, and find that, with its parameter chosen to fit the LHC data, our model is inconsistent with the RHIC data at the 3σ level, suggesting that hard probes interact more strongly with the less hot quark-gluon plasma produced at RHIC.

Introduction.
One of the most striking observations of the heavy ion physics programs of both RHIC and the LHC is the suppression of the measured yield of high-energy jets and hadrons in ultrarelativistic nucleus-nucleus collisions relative to the yield expected if these collisions were just an incoherent superposition of independent nucleon-nucleon collisions. This phenomenon, generically known as jet quenching, is a direct consequence of the energy loss experienced by the high-energy partons that form jets and subsequently fragment into hadrons as these partons traverse the strongly coupled quark-gluon plasma (QGP) produced in the same heavy ion collisions. Since such parton-medium interactions have the potential to provide tomographic information about the microscopic properties of QGP, the study of the suppression patterns of different energetic probes has been the subject of considerable experimental and theoretical research. For recent reviews, see Refs. [1][2][3][4].
Within the context of a hybrid strong/weak coupling model of jet quenching, we study the consequences of the fact that the plasma produced in a heavy ion collision cannot resolve the substructure of a collimated parton shower propagating through it with arbitrarily fine spatial resolution. We introduce a screening length parameter, L res , proportional to the inverse of the local temperature in the plasma, estimating a range for the value of the proportionality constant by comparing weakly coupled QCD calculations with holographic calculations appropriate in strongly coupled plasma. We then modify the hybrid model so that when a parton in a jet shower splits, its two offspring are initially treated as unresolved, and are only treated as two separate partons losing energy independently after they are separated by a distance L res . This modification delays the quenching of partons with intermediate energy, resulting in the survival of more final-state hadrons with p T in the several GeV range. We analyze the consequences of different choices for the value of the resolution length, L res , and demonstrate that introducing a nonzero L res modifies the jet shapes and jet fragmentation functions, as it makes it more probable for particles carrying a small fraction of the jet energy at larger angles from the jet axis to survive their passage through the quark-gluon plasma. These effects are, however, small in magnitude, something that we confirm by checking for effects on missing-p T observables.

arXiv:1707.05245v1 [hep-ph]
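The resolution-length mechanism described above can be illustrated with a minimal sketch. This is hypothetical code, not the authors' implementation: it only encodes the stated rule that, in the small-angle limit, two daughters of a splitting are treated as a single unresolved source of energy loss until their separation exceeds L res, after which they quench independently.

```python
# Hypothetical sketch of the L_res criterion described in the text.
# All function names and parameter values are illustrative, not from the paper.

def transverse_separation(opening_angle, path_length):
    """Approximate separation of the two daughters a distance `path_length`
    after the splitting, in the small-angle limit (theta * L)."""
    return opening_angle * path_length

def resolved(opening_angle, path_length, L_res):
    """True once the plasma can resolve the pair, i.e. once their
    separation exceeds the screening length L_res; only then do the
    two daughters lose energy independently."""
    return transverse_separation(opening_angle, path_length) > L_res

# Example: daughters with a 0.1 rad opening angle and L_res = 0.2 fm
# remain unresolved 1 fm after the split (separation ~0.1 fm), but are
# resolved 3 fm after it (separation ~0.3 fm).
```

Because L res is taken proportional to the inverse of the local temperature, in a full simulation the threshold would vary along the trajectory rather than being a fixed constant as in this sketch.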
DUNE is a dual-site experiment for long-baseline neutrino oscillation studies, neutrino astrophysics and nucleon decay searches. ProtoDUNE Dual Phase (DP) is a 6 × 6 × 6 m³ liquid argon time-projection chamber (LArTPC) that recorded cosmic-muon data at the CERN Neutrino Platform in 2019–2020 as a prototype of the DUNE Far Detector. Charged particles propagating through the LArTPC produce ionization and scintillation light. The scintillation light signal in these detectors can provide the trigger for non-beam events. In addition, it adds precise timing capabilities and improves the calorimetry measurements. In ProtoDUNE-DP, scintillation and electroluminescence light produced by cosmic muons in the LArTPC is collected by photomultiplier tubes placed up to 7 m away from the ionizing track. In this paper, the ProtoDUNE-DP photon detection system performance is evaluated with a particular focus on the different wavelength shifters, such as PEN and TPB, and the use of Xe-doped LAr, considering its future use in giant LArTPCs. The scintillation light production and propagation processes are analyzed and a comparison of simulation to data is performed, improving understanding of the liquid argon properties.
The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speedup of four orders of magnitude compared with the equivalent CPU version: the simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
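To make concrete the kind of per-channel microphysics step such a chain parallelizes, here is a minimal sketch of one ingredient of LArTPC charge simulation: attenuating an ionization deposit during its drift to the anode via electron attachment. This is illustrative code, not the DUNE simulator; the function name and parameter defaults are placeholders, not detector constants, and the actual software dispatches steps like this as Numba CUDA kernels over many deposits at once.

```python
import math

# Illustrative sketch (not the DUNE simulator): electrons drifting to the
# anode are lost to attachment, so the surviving charge falls off as
# exp(-t_drift / tau), with tau the electron lifetime.
# Default values below are placeholders, not measured detector constants.

def drifted_charge(n_electrons, drift_distance_cm,
                   drift_velocity_cm_per_us=0.16,
                   electron_lifetime_us=3000.0):
    """Number of ionization electrons surviving a drift of the given length."""
    t_drift_us = drift_distance_cm / drift_velocity_cm_per_us
    return n_electrons * math.exp(-t_drift_us / electron_lifetime_us)
```

In a GPU implementation this scalar function would be evaluated independently for every charge deposit and channel, which is why the workload maps so naturally onto thousands of CUDA threads.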
In the context of the hybrid strong/weak coupling model for jet quenching, we perform a global fit to hadron and jet data in the most central bins at both RHIC and the LHC. The qualitative and quantitative success of the analysis is attributed to the model correctly capturing the fact that wider jets lose, on average, more energy than narrower ones, the latter being the jets to which high-energy hadrons belong. We show how one can understand the relative jet and hadron suppression by analyzing the jet fragmentation functions, and also discuss the role of finite-resolution effects in the plasma.