A description is provided of the software algorithms developed for the CMS tracker, both for reconstructing charged-particle trajectories in proton-proton interactions and for using the resulting tracks to estimate the positions of the LHC luminous region and of individual primary-interaction vertices. Despite the very hostile environment at the LHC, the performance obtained with these algorithms is found to be excellent. For tt̄ events under typical 2011 pileup conditions, the average track-reconstruction efficiency for promptly produced charged particles with transverse momenta pT > 0.9 GeV is 94% for pseudorapidities |η| < 0.9 and 85% for 0.9 < |η| < 2.5. The inefficiency is caused mainly by hadrons that undergo nuclear interactions in the tracker material. For isolated muons, the corresponding efficiencies are essentially 100%. For isolated muons with pT = 100 GeV emitted at |η| < 1.4, the resolutions are approximately 2.8% in pT and, respectively, 10 µm and 30 µm in the transverse and longitudinal impact parameters. The position resolution achieved for reconstructed primary vertices corresponding to interesting pp collisions is 10-12 µm in each of the three spatial dimensions. The tracking and vertexing software is fast and flexible, and easily adaptable to other tasks, such as fast tracking for the trigger or dedicated tracking for electrons that takes bremsstrahlung into account.
The central component of the CMS detector is the largest silicon tracker ever built. The precise alignment of this complex device is a formidable challenge, and only achievable with a significant extension of the technologies routinely used for tracking detectors in the past. This article describes the full-scale alignment procedure as it is used during LHC operations. Among the specific features of the method are the simultaneous determination of up to 200 000 alignment parameters with tracks, the measurement of individual sensor curvature parameters, the control of systematic misalignment effects, and the implementation of the whole procedure in a multiprocessor environment for high execution speed. Overall, the achieved statistical accuracy on the module alignment is found to be significantly better than 10 µm.
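The simultaneous determination of many alignment parameters from tracks can be illustrated with a toy global fit: module offsets and all track parameters enter one joint linear least-squares problem built from hit residuals. The 1-D model below (straight tracks, two end modules fixed as the alignment reference) is purely an illustrative assumption, not the CMS procedure itself.

```python
import numpy as np

# Toy 1-D track-based alignment: solve for all interior module offsets
# and all per-track (intercept, slope) parameters in a single fit.
# Model and dimensions are illustrative assumptions.
rng = np.random.default_rng(1)
n_mod, n_trk = 6, 200
z = np.linspace(0.0, 1.0, n_mod)            # module positions along z

# True offsets; the two end modules define the reference frame (offset 0),
# which removes the global shift/tilt degeneracies of the fit.
true_off = rng.normal(scale=0.1, size=n_mod)
true_off[0] = true_off[-1] = 0.0

b = rng.normal(size=n_trk)                  # true track intercepts
s = rng.normal(size=n_trk)                  # true track slopes
y = (b[:, None] + s[:, None] * z + true_off
     + rng.normal(scale=1e-3, size=(n_trk, n_mod)))   # measured hits

# Joint design matrix: unknowns are the interior module offsets,
# followed by (intercept, slope) for every track.
free = list(range(1, n_mod - 1))
A = np.zeros((n_trk * n_mod, len(free) + 2 * n_trk))
row = 0
for t in range(n_trk):
    for m in range(n_mod):
        if m in free:
            A[row, free.index(m)] = 1.0     # offset of module m
        A[row, len(free) + 2 * t] = 1.0     # intercept of track t
        A[row, len(free) + 2 * t + 1] = z[m]  # slope of track t
        row += 1

sol, *_ = np.linalg.lstsq(A, y.ravel(), rcond=None)
est_off = sol[:len(free)]                   # fitted module offsets
```

In a realistic detector the same structure holds at vastly larger scale, which is why sparse block-matrix techniques and multiprocessor implementations, as described in the abstract, become essential.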
Abstract: We describe an important advancement of the Associative Memory device (AM). The AM is a VLSI processor for pattern recognition based on a Content Addressable Memory (CAM) architecture, optimized for on-line track finding in high-energy physics experiments. Pattern matching is carried out by finding track candidates in coarse-resolution "roads". A large AM bank stores all trajectories of interest, called "patterns", for a given detector resolution. The AM extracts the roads compatible with a given event during detector read-out.

Two important variables characterize the quality of an AM bank: its "coverage" and its level of fake roads. The coverage, which describes the geometric efficiency of a bank, is defined as the fraction of tracks that match at least one pattern in the bank. For a given road size, the coverage can be increased simply by adding patterns to the bank, while the number of fakes, unfortunately, is roughly proportional to the number of patterns in the bank. Moreover, as the luminosity increases, the fake rate grows rapidly because of the increased silicon occupancy. To counter this, we must reduce the width of the roads; but if we decrease the road width using the current technology, the system becomes very large and extremely expensive.

We propose an elegant solution to this problem: "variable resolution patterns". Each pattern, and each detector layer within a pattern, can use the optimal width, exploiting a "don't care" feature (inspired by ternary CAMs) to increase the width where that is more appropriate. In other words, we can use patterns of variable shape. As a result we reduce the number of fake roads, while keeping the efficiency high and avoiding the excessive bank size that a uniformly reduced width would entail.

We describe the idea, its implementation in the new AM design, and the implementation of the algorithm in the simulation.
Finally we show the effectiveness of the "variable resolution patterns" idea using simulated high occupancy events in the ATLAS detector.
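The core of the variable-resolution idea can be sketched in a few lines: each layer of a pattern stores a superstrip ID together with a count of low-order "don't care" bits, so dropping those bits widens the effective road for that layer only. The function names, bin encoding, and example values below are illustrative assumptions, not the AM chip design.

```python
# Minimal sketch of associative-memory pattern matching with
# "don't care" (ternary) bits. A pattern stores, per detector layer,
# a coarsened superstrip ID and the number of low-order bits ignored.

def make_pattern(superstrips, dc_bits_per_layer):
    """Coarsen each layer's superstrip ID by its don't-care bit count."""
    return [(ss >> dc, dc) for ss, dc in zip(superstrips, dc_bits_per_layer)]

def matches(pattern, event_hits):
    """An event matches a pattern if every layer has at least one hit
    whose superstrip ID, after dropping the don't-care bits, equals
    the stored coarsened ID for that layer."""
    return all(
        any((hit >> dc) == ss for hit in layer_hits)
        for (ss, dc), layer_hits in zip(pattern, event_hits)
    )

# Example: a 3-layer pattern; layer 1 uses one don't-care bit,
# so superstrips 6 and 7 both match on that layer.
bank = [make_pattern([12, 6, 9], [0, 1, 0])]
event = [[12], [7], [9]]            # one hit per layer
roads = [p for p in bank if matches(p, event)]
```

The design choice this illustrates is the trade-off discussed above: widening a single layer with a don't-care bit keeps efficiency high without duplicating the pattern, whereas narrowing every layer uniformly would require a much larger bank.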
As the LHC luminosity is ramped up to 3 × 10³⁴ cm⁻² s⁻¹ and beyond, the high rates, multiplicities, and energies of the particles seen by the detectors will pose a unique challenge. Only a tiny fraction of the produced collisions can be stored offline, so immense real-time data reduction is needed. An effective trigger system must maintain high trigger efficiencies for the physics of greatest interest while suppressing the enormous QCD backgrounds. This requires massive computing power to minimize the online execution time of complex algorithms; a multi-level trigger is an effective solution to this challenge. The Fast Tracker (FTK) is an upgrade to the current ATLAS trigger system that will operate at the full Level-1 output rate and provide high-quality tracks, reconstructed over the entire inner detector, by the start of processing in the Level-2 trigger. FTK solves the combinatorial challenge inherent in tracking by exploiting the massive parallelism of associative memories, which can compare inner-detector hits to millions of pre-calculated patterns simultaneously. The tracking problem within matched patterns is further simplified by using pre-computed linearized fitting constants and by relying on fast DSPs in modern commercial FPGAs. Overall, FTK is able to compute the helix parameters for all tracks in an event and apply quality cuts in less than 100 µs. The system design is defined, and the performance is presented with respect to high transverse momentum (high-pT) Level-2 objects: b jets, tau jets, and isolated leptons. We test the FTK algorithms using the full ATLAS simulation with WH events at luminosities up to 3 × 10³⁴ cm⁻² s⁻¹, and compare the FTK results with the offline tracking capability. We present the architecture and the reconstruction performance for the above high-pT Level-2 objects.
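The pre-computed linearized fit mentioned above can be illustrated schematically: within a matched road, the helix parameters are estimated as an affine function p = C·x + q of the hit coordinates x, with the constants C and q computed offline so that the online step is a single matrix-vector product. The toy model, dimensions, and training procedure below are illustrative assumptions, not the FTK firmware.

```python
import numpy as np

# Schematic linearized track fit: estimate parameters p from hit
# coordinates x as p = C @ x + q, with (C, q) pre-computed offline
# (here by linear least squares on toy "training" tracks).
rng = np.random.default_rng(0)

n_hits, n_par = 8, 2                 # toy: 8 coordinates -> 2 parameters
true_C = rng.normal(size=(n_par, n_hits))
true_q = rng.normal(size=n_par)

# Offline stage: sample coordinates and their exact parameters,
# then fit the affine constants once.
X = rng.normal(size=(1000, n_hits))
P = X @ true_C.T + true_q
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, P, rcond=None)
C_fit, q_fit = coef[:-1].T, coef[-1]

# Online stage: one matrix-vector product per matched road.
x = rng.normal(size=n_hits)
p_est = C_fit @ x + q_fit
p_true = true_C @ x + true_q
```

In the real system the relation between hits and helix parameters is only locally linear, so a separate set of constants is stored per road; the sketch uses an exactly linear toy model to keep the offline/online split visible.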