This publication describes the methods used to measure the centrality of inelastic Pb-Pb collisions at a center-of-mass energy of 2.76 TeV per colliding nucleon pair with ALICE. The centrality is a key parameter in the study of the properties of QCD matter at extreme temperature and energy density, because it is directly related to the initial overlap region of the colliding nuclei. Geometrical properties of the collision, such as the number of participating nucleons and the number of binary nucleon-nucleon collisions, are deduced from a Glauber model with a sharp impact parameter selection and shown to be consistent with those extracted from the data. The centrality determination provides a tool to compare ALICE measurements with those of other experiments and with theoretical calculations.
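The Glauber approach mentioned above can be illustrated with a small Monte Carlo: sample nucleon positions from a Woods-Saxon density, offset the two nuclei by the impact parameter, and count wounded nucleons (N_part) and binary nucleon-nucleon collisions (N_coll). This is a minimal sketch under assumed, textbook-style parameter values; it is not the tuned Glauber configuration used in the ALICE analysis.

```python
import math
import random

# Minimal Monte Carlo Glauber sketch (illustrative only; the Woods-Saxon
# parameters and NN cross section are typical values for Pb at LHC energies,
# NOT the tuned values used in the paper).
A = 208                          # mass number of Pb
R, a = 6.62, 0.546               # Woods-Saxon radius and diffuseness (fm), assumed
SIGMA_NN_FM2 = 6.4               # inelastic NN cross section ~64 mb at 2.76 TeV, in fm^2
D2_MAX = SIGMA_NN_FM2 / math.pi  # "black-disk" criterion: collide if d^2 < sigma/pi

def sample_radius():
    """Rejection-sample r from rho(r) ~ r^2 / (1 + exp((r - R)/a))."""
    rmax = R + 10.0 * a
    while True:
        r = random.uniform(0.0, rmax)
        if random.uniform(0.0, rmax * rmax) < r * r / (1.0 + math.exp((r - R) / a)):
            return r

def sample_nucleus(x_shift):
    """Transverse (x, y) positions of A nucleons, nucleus centred at x_shift."""
    pts = []
    for _ in range(A):
        r = sample_radius()
        cos_t = random.uniform(-1.0, 1.0)          # isotropic polar angle
        phi = random.uniform(0.0, 2.0 * math.pi)
        s = r * math.sqrt(1.0 - cos_t * cos_t)     # transverse radius
        pts.append((s * math.cos(phi) + x_shift, s * math.sin(phi)))
    return pts

def npart_ncoll(b):
    """One Glauber event at impact parameter b (fm): returns (Npart, Ncoll)."""
    na, nb = sample_nucleus(-b / 2.0), sample_nucleus(+b / 2.0)
    wounded_a, wounded_b = [False] * A, [False] * A
    ncoll = 0
    for i, (xa, ya) in enumerate(na):
        for j, (xb, yb) in enumerate(nb):
            if (xa - xb) ** 2 + (ya - yb) ** 2 < D2_MAX:
                ncoll += 1
                wounded_a[i] = wounded_b[j] = True
    return sum(wounded_a) + sum(wounded_b), ncoll
```

A central event (b near 0) yields N_part close to 2A and N_coll well above N_part, while both fall steeply toward peripheral impact parameters, which is the geometric mapping that underlies the centrality classes.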
In this paper measurements are presented of π±, K±, p, and p̄ production at midrapidity (|y| < 0.5) in Pb-Pb collisions at √sNN = 2.76 TeV as a function of centrality. The measurement covers the transverse-momentum (pT) range from 100, 200, and 300 MeV/c up to 3, 3, and 4.6 GeV/c for π, K, and p, respectively. The measured pT distributions and yields are compared to expectations based on hydrodynamic, thermal, and recombination models. The spectral shapes of central collisions show a stronger radial flow than measured at lower energies, which can be described in hydrodynamic models. In peripheral collisions, the pT distributions are not well reproduced by hydrodynamic models. Ratios of integrated particle yields are found to be nearly independent of centrality. The yield of protons normalized to pions is a factor ∼1.5 lower than the expectation from thermal models.
ALICE is the heavy-ion experiment at the CERN Large Hadron Collider. The experiment continuously took data during the first physics campaign of the machine from fall 2009 until early 2013, using proton and lead-ion beams. In this paper we describe the running environment and the data handling procedures, and discuss the performance of the ALICE detectors and analysis methods for various physics observables.
We present a new generic framework which enables exact and fast evaluation of all multi-particle azimuthal correlations. The framework can be readily used along with a correction framework for systematic biases in anisotropic flow analyses due to various detector inefficiencies. A new recursive algorithm has been developed for higher-order correlators for the cases where their direct implementation is not feasible. We propose and discuss new azimuthal observables for anisotropic flow analyses which can be measured for the first time with our new framework. The effect of finite detector granularity on multi-particle correlations is quantified and discussed in detail. We point out the existence of a systematic bias in traditional differential flow analyses which stems solely from the applied selection criteria on particles used in the analyses, and is also present in the ideal case when only flow correlations are present. Finally, we extend the applicability of our generic framework to the case of differential multi-particle correlations.

PACS numbers: 25.75.Ld, 25.75.Gz, 05.70.Fh

arXiv:1312.3572v2 [nucl-ex] 20 Dec 2013

... the p.d.f. of M particles for an event with multiplicity M was utilized in flow analyses for the first time. On the other hand, the very first experimental attempt to go beyond two-particle azimuthal correlations [4] dates back to Bevalac work published in [5]. In that paper, a quantitative description of collectivity was attempted by generalizing the observable for two-particle correlations, namely the smaller angle between the transverse momenta of two produced particles, into the geometric mean of n (n > 2) azimuthal separations within the n-particle multiplet.
However, it was realized immediately that the net contribution of low-order few-particle correlations is cumulative if one increases the number of particles in such multiplets, which triggered the demand for more sophisticated techniques that would instead systematically suppress such contributions for increasingly large multiplets [5]. This was pursued further in a series of papers on multi-particle correlations and cumulants by Borghini et al. (for a summary of the mathematical and statistical properties of cumulants we refer the reader to [6]). In the first paper of the series [7], Borghini et al. defined cumulants in the context of flow analyses in terms of the moments of the distribution of the Q-vector amplitude [1,2,8]. As a landmark of their approach, the authors introduced a formalism of generating functions, accompanied by interpolation methods in the complex plane, as the simplest and fastest way to calculate cumulants from experimental data. The formalism of generating functions is particularly robust against biases stemming from non-uniform detector acceptance, which is frequently the dominant systematic bias in anisotropic flow analyses. However, there were some serious drawbacks, which were already recognized and discussed by the authors in the original paper. Most notably, both two- and multi-particle cumulants were plagued ...
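The exact Q-vector evaluation of multi-particle correlations discussed above can be demonstrated in a few lines. The sketch below uses the standard Q-cumulant expressions from the flow literature for the single-event two- and four-particle correlations, ⟨2⟩ = (|Qn|² − M) / (M(M−1)) and the analogous ⟨4⟩ formula, and cross-checks them against the brute-force average over all distinct particle tuples. The function names are mine; this is an illustration of the technique, not the framework's actual implementation.

```python
import cmath
import itertools
import random

def qvec(phis, n):
    """Flow vector Q_n = sum_k exp(i n phi_k) over all particles in the event."""
    return sum(cmath.exp(1j * n * phi) for phi in phis)

def corr2_qvec(phis, n):
    """Exact single-event <2> = <exp(i n (phi1 - phi2))> from Q-vectors, O(M)."""
    M = len(phis)
    Qn = qvec(phis, n)
    return (abs(Qn) ** 2 - M) / (M * (M - 1))

def corr4_qvec(phis, n):
    """Exact single-event <4> = <exp(i n (phi1 + phi2 - phi3 - phi4))>, O(M)."""
    M = len(phis)
    Qn, Q2n = qvec(phis, n), qvec(phis, 2 * n)
    num = (abs(Qn) ** 4 + abs(Q2n) ** 2
           - 2.0 * (Q2n * Qn.conjugate() * Qn.conjugate()).real
           - 4.0 * (M - 2) * abs(Qn) ** 2 + 2.0 * M * (M - 3))
    return num / (M * (M - 1) * (M - 2) * (M - 3))

def corr2_direct(phis, n):
    """Brute-force average over all distinct ordered pairs -- O(M^2)."""
    terms = [cmath.exp(1j * n * (a - b)).real
             for a, b in itertools.permutations(phis, 2)]
    return sum(terms) / len(terms)

def corr4_direct(phis, n):
    """Brute-force average over all distinct ordered quadruplets -- O(M^4)."""
    terms = [cmath.exp(1j * n * (a + b - c - d)).real
             for a, b, c, d in itertools.permutations(phis, 4)]
    return sum(terms) / len(terms)
```

The point of the Q-vector formulation is exactly this equivalence: the nested loops over distinct multiplets, which become infeasible for large multiplicity and high correlator order, are replaced by closed-form combinations of a handful of Q-vectors computed in a single pass over the particles.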