2019
DOI: 10.1051/epjconf/201921403010
Overview of the ATLAS distributed computing system

Abstract: The CERN ATLAS experiment successfully uses a worldwide computing infrastructure to support the physics program during LHC Run 2. The Grid workflow system PanDA routinely manages 250 to 500 thousand concurrently running production and analysis jobs to process simulation and detector data. In total, more than 370 PB of data are distributed over more than 150 sites in the WLCG and handled by the ATLAS data management system Rucio. To prepare for the ever-growing LHC luminosity in future runs, new developments are u…

Cited by 9 publications (8 citation statements)
References 5 publications
“…The HCDC model targets the bandwidth and access latency bottlenecks of the data carousel model. The HCDC model can be used by the continuous ATLAS derivation production workflow [1]. In this model only tape storage would contain permanent replicas of the input data.…”
Section: Simulation of Cloud Data Management
confidence: 99%
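The carousel idea this statement alludes to can be illustrated with a toy staging loop: tape holds the only permanent replicas, a limited disk buffer is filled in waves, each wave is processed, and the disk copies are then evicted. All dataset names, sizes, and the buffer capacity below are invented for illustration and are not part of the ATLAS systems.

```python
from collections import deque

def carousel(datasets, buffer_capacity):
    """Process (name, size) datasets through a disk buffer of limited size.

    Tape is the only permanent store; disk replicas exist just long
    enough for a processing wave. Returns the processing order.
    """
    tape = deque(datasets)          # permanent replicas live on tape
    processed_order = []
    while tape:
        buffer, used = [], 0
        # stage from tape until the next dataset would overflow the buffer
        while tape and used + tape[0][1] <= buffer_capacity:
            name, size = tape.popleft()
            buffer.append(name)
            used += size
        if not buffer:              # a single dataset exceeds the buffer
            raise ValueError("buffer too small for " + tape[0][0])
        processed_order.extend(buffer)   # "process" the staged wave
        # disk replicas are now evicted; the tape copy stays authoritative
    return processed_order

order = carousel([("AOD.1", 3), ("AOD.2", 2), ("AOD.3", 4)], buffer_capacity=5)
print(order)  # first wave stages AOD.1 and AOD.2, second wave AOD.3
```

The point of the sketch is the bottleneck the HCDC model targets: throughput is gated by how fast waves can be staged from tape into the shared buffer.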
“…The distributed computing system [1] of the ATLAS experiment [2] at the LHC is built around two main components: the workflow management system PanDA [3] and the data management system Rucio [4]. The involved systems manage the computing resources to process the detector data at the Tier-0 at CERN, reprocess it periodically at the distributed Tier-1 and Tier-2 Worldwide LHC Computing Grid (WLCG) [5] sites, and run continuous Monte Carlo (MC) simulation and reconstruction.…”
Section: Introduction
confidence: 99%
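The workflow-management side of this architecture can be sketched as a minimal job broker: jobs are dispatched greedily to the site with the most free slots. The site names, slot counts, and the greedy policy below are illustrative assumptions, not PanDA internals.

```python
def broker(jobs, site_slots):
    """Assign each job to the site currently holding the most free slots."""
    free = dict(site_slots)         # mutable copy of per-site capacity
    assignment = {}
    for job in jobs:
        site = max(free, key=free.get)   # greediest site first
        if free[site] == 0:
            raise RuntimeError("no free slots anywhere")
        assignment[job] = site
        free[site] -= 1
    return assignment

plan = broker(["sim-1", "sim-2", "reco-1"],
              {"TIER1-A": 2, "TIER2-B": 1})
print(plan)  # reco-1 lands on TIER2-B once TIER1-A is full
```

A real brokerage additionally weighs data locality (where Rucio has placed the input replicas), job priorities, and site availability; the sketch only shows the slot-matching skeleton.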
“…Figure 1: At left: the WLCG infrastructure which, across its Tier0, Tier1, and Tier2 centers, provided the bulk of the computing capacity for LHC Run 1 and Run 2 processing, storage, simulation, and analysis. At right: distributed computing system components (in this case ATLAS [3]) which are typical of the LHC experiments.…”
Section: Scale of Operations
confidence: 99%
“…The distributed computing system of the ATLAS experiment [1] at the LHC is built around two main components: the workflow management system PanDA and the data management system Rucio [2]. It manages the computing resources to process the detector data at the Tier-0 at CERN, reprocesses it once per year at the Tier-1 and Tier-2 Worldwide LHC Computing Grid (WLCG) [3] sites, and runs continuous Monte Carlo (MC) simulation and reconstruction.…”
Section: Introduction
confidence: 99%