Measurements of inclusive charged-hadron transverse-momentum and pseudorapidity distributions are presented for proton-proton collisions at √s = 0.9 and 2.36 TeV. The data were collected with the CMS detector during the LHC commissioning in December 2009. For non-single-diffractive interactions, the average charged-hadron transverse momentum is measured to be 0.46 ± 0.01 (stat.) ± 0.01 (syst.) GeV/c at 0.9 TeV and 0.50 ± 0.01 (stat.) ± 0.01 (syst.) GeV/c at 2.36 TeV, for pseudorapidities between −2.4 and +2.4. At these energies, the measured pseudorapidity densities in the central region, dN_ch/dη|_{|η|<0.5}, are 3.48 ± 0.02 (stat.) ± 0.13 (syst.) and 4.47 ± 0.04 (stat.) ± 0.16 (syst.), respectively. The results at 0.9 TeV are in agreement with previous measurements and confirm the expectation of near equal hadron production in pp̄ and pp collisions. The results at 2.36 TeV represent the highest-energy measurements at a particle collider to date.
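For reference, the quantities quoted use the standard definition of pseudorapidity in terms of the polar angle θ measured from the beam axis (a textbook definition, not specific to this analysis):

```latex
\eta = -\ln\tan\frac{\theta}{2},
\qquad
\left.\frac{\mathrm{d}N_{\mathrm{ch}}}{\mathrm{d}\eta}\right|_{|\eta|<0.5}
```

where dN_ch/dη|_{|η|<0.5} denotes the charged-hadron density per unit pseudorapidity averaged over the central region |η| < 0.5.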
The construction of well-tuned probability distributions is illustrated in a synthetic way; these probability distributions are optimized to produce faithful realizations of the impact-point distributions of particles in a silicon strip detector. Their use in track fitting shows a drastic improvement: a factor of two in the low-noise case and a factor of three in the high-noise case, with respect to the standard approach. The tracks are well reconstructed even in the presence of hits with large errors, with a surprising effect of hit discarding. The applications illustrated are simulations of the PAMELA tracker, but other types of trackers can be handled similarly. The probability distributions are calculated for the center-of-gravity algorithms, and they are very different from Gaussian probabilities. These differences are crucial for accurately reconstructing tracks with high-error hits and for effectively discarding excessively noisy hits (outliers). The similarity of our distributions to the Cauchy distribution forced us to abandon the standard deviation in our comparisons and to use the full width at half maximum instead. A set of mathematical approaches must be developed for these applications; some of them are standard in a broad sense, even if very complex. One, however, is essential: in its absence, all the others are useless. Therefore, in this paper, we report the details of this critical approach. It extracts physical properties of the detectors and allows the insertion of the functional dependence on the impact point into the probability distributions. Other papers will be dedicated to the remaining parts.
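Two points in this abstract can be made concrete with a toy sketch: the center-of-gravity position algorithm and why Cauchy-like error distributions force the use of the full width at half maximum (FWHM) instead of the standard deviation. The following Python fragment uses hypothetical numbers and is not the construction developed in the paper:

```python
import numpy as np

def center_of_gravity(strip_positions, strip_signals):
    """Center-of-gravity estimate of the impact point from strip signals."""
    x = np.asarray(strip_positions, dtype=float)
    s = np.asarray(strip_signals, dtype=float)
    return np.sum(x * s) / np.sum(s)

def fwhm(samples, bins=201, span=(-5.0, 5.0)):
    """FWHM of an empirical distribution, read off a histogram."""
    counts, edges = np.histogram(samples, bins=bins, range=span)
    centers = 0.5 * (edges[:-1] + edges[1:])
    above = centers[counts >= counts.max() / 2.0]
    return above.max() - above.min()

# Two-strip center of gravity, strip positions in pitch units:
print(center_of_gravity([0.0, 1.0], [30.0, 10.0]))   # -> 0.25

rng = np.random.default_rng(0)
residuals = rng.standard_cauchy(100_000)  # Cauchy-like positioning errors
print("sample std:", residuals.std())     # unstable: a Cauchy has no variance
print("FWHM      :", fwhm(residuals))     # ~2, the width of a unit Cauchy
```

The sample standard deviation of Cauchy-distributed residuals grows without bound as more samples are drawn, while the FWHM converges to the true width; this is the practical reason the abstract gives for switching to the FWHM.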
A new fitting method is explored for momentum reconstruction. The tracker model reproduces a set of silicon micro-strip detectors in a constant magnetic field. The new fitting method gives substantial increases in momentum resolution with respect to the standard fit. The key point is the use of a realistic probability distribution for each hit (heteroscedasticity). Two different methods are used for the fits: the first introduces an effective variance as the weight of each hit; the second searches for the maximum likelihood. The tracker model is similar to the PAMELA tracker with its double-sided detectors. The two detector sides have very different properties and quality, and each side is simulated as a momentum-reconstruction device. One of the two is similar to the silicon micro-strip detectors in wide use in running experiments. Two different position reconstructions are used for the standard fits: the η-algorithm (the best one) and the two-strip center of gravity. The gain in momentum resolution is measured as the virtual magnetic field and the virtual signal-to-noise ratio required by the two standard fits to reach overlap with the better of the two new methods. For the low-noise side, the virtual magnetic field must be increased 1.5 times with respect to the real field to reach the overlap, and 1.8 times for the other algorithm. For the high-noise side, the increases must be 1.8 and 2.0. The virtual signal-to-noise ratio has to be increased by a factor of 1.6 for the low-noise side and 2.2 for the high-noise side (η-algorithm). Changes of the signal-to-noise ratio have no effect on the fits with the center of gravity as position algorithm. The momentum resolution is simulated also as a function of the number N of detection layers. A very rapid linear increase with N is observed for our two methods, while the two standard fits show the usual growth as √N. Other interesting effects are obtained by selecting tracks with good or excellent hits.

KEYWORDS: Performance of High Energy Physics Detectors; Pattern recognition, cluster finding, calibration and fitting methods; Si microstrip and pad detectors; Analysis and statistical methods.

… (the most recent ones of a very large set) are addressed to the first three effects. The handling of the full non-linearity of the particle path is clearly described in ref. [3]. This work is addressed to an accurate study of the statistical effects in the positioning algorithms for minimum ionizing particles (MIPs), and to how to minimize their influence on the momentum reconstruction. For this task, realistic probability density functions (PDFs; "probability distributions" in the physicists' jargon) for the errors of the positioning algorithms must be used. This strategy allows going beyond the results of the standard least-squares method or of any other fitting method that neglects the hit statistical properties, i.e., the heteroscedasticity (as it is called in the literature). For our task, we will simulate momentum values where multiple scattering is unimportant, and similarly for the energy loss (principally fo...
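The first of the two methods named above (a per-hit effective variance used as weight) can be illustrated with a toy fit. The sketch below uses hypothetical geometry, track parameters, and noise figures, and is not the paper's implementation; it only shows why a heteroscedastic weighted least squares outperforms an unweighted fit of the same hits:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6                                # detection layers (hypothetical)
z = np.linspace(0.0, 50.0, N)        # layer positions along the track (cm)
a, b, c = 0.1, 1e-3, 2e-5            # track y = a + b*z + c*z^2; c ∝ 1/p

def fitted_curvature(weighted):
    """Simulate one track and return the fitted curvature c."""
    # 30% of the hits are 'noisy' (120 um error), the rest 'good' (10 um),
    # mimicking hits of very different quality (heteroscedasticity).
    sigma = np.where(rng.random(N) < 0.3, 120e-4, 10e-4)   # cm
    y = a + b * z + c * z**2 + rng.normal(0.0, sigma)
    w = 1.0 / sigma if weighted else None    # np.polyfit expects w = 1/sigma
    return np.polyfit(z, y, 2, w=w)[0]       # leading coefficient = curvature

trials = 20_000
rms_w  = np.std([fitted_curvature(True)  for _ in range(trials)])
rms_uw = np.std([fitted_curvature(False) for _ in range(trials)])
print(f"curvature rms, weighted fit  : {rms_w:.3e}")
print(f"curvature rms, unweighted fit: {rms_uw:.3e}")
# The weighted fit yields a clearly smaller curvature rms, i.e. a better
# momentum resolution, because the weights suppress the noisy hits.
```

Since the curvature of the trajectory is inversely proportional to the momentum, the reduction in the curvature rms translates directly into the momentum-resolution gains the abstract quantifies via the virtual magnetic field and virtual signal-to-noise ratio.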
The Cramér–Rao–Fréchet inequality is reviewed and extended to track fitting. A widespread opinion attributes to this inequality a limitation on the resolution of track fits as the number N of observations grows. It will be shown that this opinion is incorrect: the weighted least-squares method is not subject to that N-limitation, and the resolution can be improved beyond those limits. In previous publications, simulations with realistic models and with simple Gaussian models produced interesting results: linear growth of the peaks of the distributions of the fitted parameters with the number N of observations, much faster than the √N growth of the standard least squares. These results could be considered a violation of the well-known 1/N rule for the variance of an unbiased estimator, frequently reported as the Cramér–Rao–Fréchet bound. To clarify this point beyond any doubt, a direct proof of the consistency of those results with the inequality would be essential. Unfortunately, such a proof is lacking. Hence, the Cramér–Rao–Fréchet developments are applied to prove the efficiency (optimality) of the simple Gaussian model and the consistency of the linear growth. The inequality remains valid even for irregular models, supporting a similar improvement of resolution for the realistic models.
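For reference, the bound under discussion: for an unbiased estimator θ̂ built from N independent, identically distributed observations of a regular model f(x; θ), the variance obeys

```latex
\operatorname{Var}\!\left(\hat{\theta}\right) \;\geq\; \frac{1}{N\, I(\theta)},
\qquad
I(\theta) \;=\; \mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta} \ln f(x;\theta)\right)^{\!2}\right].
```

The 1/N form of the bound thus presupposes identically distributed observations from a regular model. Track fitting with heteroscedastic hits need not satisfy these premises, which is consistent with the abstract's claim that a faster-than-√N improvement of the resolution does not contradict the inequality.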