Abstract. Storage services are crucial components of the Worldwide LHC Computing Grid (WLCG) infrastructure, which spans more than 200 sites and provides computing and storage resources to the High Energy Physics LHC communities. Up to tens of petabytes of data are collected every year by the four LHC experiments at CERN. To process these large data volumes it is important to establish a protocol and a very efficient interface to the various storage solutions adopted by the WLCG sites. In this work we report on the experience acquired during the definition of the Storage Resource Manager (SRM) v2.2 protocol. In particular, we focus on the study performed to enhance the interface and make it suitable for use by the WLCG communities. At the moment, five different storage solutions implement the SRM v2.2 interface: BeStMan (LBNL), CASTOR (CERN and RAL), dCache (DESY and FNAL), DPM (CERN), and StoRM (INFN and ICTP). After a detailed internal review of the protocol, various test suites were written to identify the most effective set of tests: the S2 test suite from CERN and the SRMTester test suite from LBNL. These test suites have helped verify the consistency and coherence of the proposed protocol and validate the existing implementations. We conclude by describing the results achieved.

Introduction

The Worldwide LHC Computing Grid (WLCG) [1] infrastructure is the largest Grid in the world, comprising about 230 sites worldwide [2]. It was established mainly to support the four Large Hadron Collider (LHC) experiments at CERN. The LHC is the world's biggest machine for studying the fundamental properties of sub-atomic particles and is due to start operating in 2008. The goal of the WLCG project is to establish a world-wide Grid infrastructure of computing centres that provides sufficient computational, storage and network resources to fully exploit the scientific potential of the four major experiments operating on LHC data: ALICE, ATLAS, CMS and LHCb.
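To make the shape of the interface concrete, the following is a minimal, purely illustrative sketch of what a subset of the SRM v2.2 operations might look like. The real protocol is a SOAP/WSDL web service; the operation names below (srmPing, srmPrepareToGet, srmReserveSpace) follow the specification, but the signatures, types, and the toy server are simplified stand-ins invented here, not the actual WSDL definitions of any of the five listed implementations.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class SrmReturnStatus:
    """Simplified stand-in for the status structure SRM calls return."""
    status_code: str          # e.g. "SRM_SUCCESS", "SRM_REQUEST_QUEUED"
    explanation: str = ""

class SrmV22Interface(ABC):
    """A minimal subset of SRM v2.2 operations a storage system exposes."""

    @abstractmethod
    def srm_ping(self) -> SrmReturnStatus:
        """Check that the endpoint is alive and report its version."""

    @abstractmethod
    def srm_prepare_to_get(self, surls: list) -> SrmReturnStatus:
        """Stage files for reading; asynchronous in the real protocol."""

    @abstractmethod
    def srm_reserve_space(self, size_bytes: int) -> SrmReturnStatus:
        """Dynamic space allocation, one of the v2.2 additions."""

class ToySrmServer(SrmV22Interface):
    """Trivial in-memory stand-in, not modelled on any real backend."""

    def __init__(self):
        self.staged = []

    def srm_ping(self):
        return SrmReturnStatus("SRM_SUCCESS", "v2.2")

    def srm_prepare_to_get(self, surls):
        self.staged.extend(surls)
        return SrmReturnStatus("SRM_REQUEST_QUEUED")

    def srm_reserve_space(self, size_bytes):
        if size_bytes <= 0:
            return SrmReturnStatus("SRM_INVALID_REQUEST", "non-positive size")
        return SrmReturnStatus("SRM_SUCCESS")
```

The abstract base class plays the role of the common specification: any backend (disk-only, tape-backed, hierarchical) can implement it, which is how a uniform Grid interface coexists with diverse site infrastructures.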
These experiments will generate enormous amounts of data (10-15 petabytes per year). The computing and storage services needed to analyze them are provided by a geographically distributed Data Grid. Given the variety of the storage solutions adopted by the sites collaborating in the WLCG infrastructure, it was considered important to provide an efficient and uniform Grid interface to storage, allowing the experiments to access data transparently, independently of the storage implementation available at a site. This effort gave rise to the Grid Storage Management Working Group (GSM-WG) at the Open Grid Forum (OGF) [3]. In what follows, we report on the experience acquired during the definition of the Storage Resource Manager (SRM) v2.2 protocol. In particular, we focus on the study performed to enhance the interface and make it suitable for use by the WLCG communities. In Section 2, we elaborate on the protocol definition process and on the collection of the requirements as described by the LHC experiments. In Section 3, we discuss version 2.2 of the SRM protocol as it is defined today a...
Storage management is one of the most important enabling technologies for large-scale scientific investigations. Having to deal with multiple heterogeneous storage and file systems is one of the major bottlenecks in managing, replicating, and accessing files in distributed environments. Storage Resource Managers (SRMs), named after their web-services control protocol, provide the technology needed to manage the rapidly growing distributed data volumes produced by ever faster and larger computational facilities. SRMs are Grid storage services that provide interfaces to storage resources, as well as advanced functionality such as dynamic space allocation and file management on shared storage systems. They call on transport services to bring files into their space transparently and provide effective sharing of files. SRMs are based on a common specification that emerged over time and evolved into an international collaboration. This approach of an open specification that various institutions can adapt to their own storage systems has proven a remarkable success: the challenge has been to provide a consistent, homogeneous interface to the Grid while allowing sites to have diverse infrastructures. In particular, supporting optional features while preserving interoperability is one of the main challenges we describe in this paper. We also describe the use of SRM in a large international High Energy Physics collaboration, WLCG, to prepare for the large volume of data expected when the Large Hadron Collider (LHC) goes online at CERN. This intense collaboration led to refinements and additional functionality in the SRM specification, and to the development of multiple interoperating implementations of SRM for various complex, multi-component storage systems.
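The interoperability problem described above — several independent implementations, some of which treat certain operations as optional — is exactly what cross-implementation test suites such as S2 and SRMTester probe. The following is a hypothetical sketch of that idea only: run the same operation sequence against every endpoint and flag where results diverge. The endpoint names, lambdas, and status strings here are invented for illustration and do not reproduce either actual test suite.

```python
# Hypothetical sketch of cross-implementation conformance testing,
# in the spirit of suites like S2 and SRMTester (not their actual code).

# Each "endpoint" stands in for one SRM implementation; the callable
# maps an operation name to the status code that endpoint would return.
ENDPOINTS = {
    "impl-a": lambda op: "SRM_SUCCESS",
    # impl-b treats space reservation as an unsupported optional feature.
    "impl-b": lambda op: ("SRM_NOT_SUPPORTED"
                          if op == "srmReserveSpace" else "SRM_SUCCESS"),
}

OPERATIONS = ["srmPing", "srmPrepareToGet", "srmReserveSpace"]

def conformance_report(endpoints, operations):
    """Return {operation: {endpoint: status}} so divergences are visible."""
    return {op: {name: call(op) for name, call in endpoints.items()}
            for op in operations}

def divergent_operations(report):
    """List operations on which the implementations disagree."""
    return [op for op, results in report.items()
            if len(set(results.values())) > 1]

report = conformance_report(ENDPOINTS, OPERATIONS)
divergent = divergent_operations(report)
```

Running the same matrix over all implementations makes the tension between optional features and interoperability measurable: any operation appearing in the divergent list is a candidate for clarification in the specification.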