This paper reports the design and implementation of a secure, wide area network, distributed filesystem by the ExTENCI project (Extending Science Through Enhanced National Cyberinfrastructure), based on Lustre. The filesystem is used for remote access to analysis data from the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC), and from the Lattice Quantum ChromoDynamics (LQCD) project. Security is provided by Kerberos and reinforced with additional fine-grained control using Lustre ACLs and quotas. We show the impact of using Kerberized Lustre on the I/O rates of CMS and LQCD applications on client nodes, both real and virtual. Preconfigured images of Lustre virtual clients containing the complete software stack ease the management of these systems.
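As a rough illustration of the fine-grained controls mentioned in the abstract above, the following minimal Python sketch applies a POSIX ACL to a shared directory and sets a per-user quota using the standard setfacl and lfs tools. The user name, paths, and limits are illustrative assumptions, not values taken from the paper.

    #!/usr/bin/env python3
    """Hypothetical sketch: a POSIX ACL plus a per-user quota on a Lustre client.
    All names and limits are assumptions for illustration."""
    import subprocess

    MOUNT = "/mnt/lustre"                 # assumed Lustre client mount point
    PROJECT_DIR = MOUNT + "/cms/project"  # assumed shared analysis directory
    USER = "alice"                        # assumed Kerberos-authenticated user

    def run(cmd):
        """Echo and execute an administration command."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Grant the user read/write/execute on the project directory via a POSIX ACL
    # (the client must be mounted with ACL support enabled).
    run(["setfacl", "-m", f"u:{USER}:rwx", PROJECT_DIR])

    # Cap the user's block usage on this filesystem; limits are in kilobytes
    # (roughly 1 TB soft, 1.2 TB hard).
    run(["lfs", "setquota", "-u", USER,
         "-b", "1000000000", "-B", "1200000000", MOUNT])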
In this paper, we describe our current implementation of Kerberized Lustre 2.0 over the WAN with partners from the TeraGrid (SDSC), the Naval Research Laboratory, and the Open Science Grid (University of Florida). After setting up several single Kerberos realms, we enable the distributed OSTs over the WAN, create local OST pools, and perform Kerberized data transfers between local and remote sites. To broaden access to the Lustre filesystem, we also describe our efforts towards cross-realm authentication and the integration of Lustre 2.0 with Kerberos-enabled NFSv4.
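A minimal sketch of how a site-local OST pool of the kind mentioned above might be defined with the standard Lustre administration commands, so that new files under a chosen directory stay on the local site's storage targets. The filesystem name, OST range, and mount point below are illustrative assumptions.

    #!/usr/bin/env python3
    """Hypothetical sketch: create a site-local OST pool and pin a directory to it.
    Filesystem name, OST indices, and paths are assumptions for illustration."""
    import subprocess

    FSNAME = "extenci"                    # assumed Lustre filesystem name
    POOL = f"{FSNAME}.local"              # pool grouping the local site's OSTs
    LOCAL_OSTS = f"{FSNAME}-OST[0-3]"     # assumed indices of the local OSTs
    LOCAL_DIR = "/mnt/extenci/localdata"  # directory pinned to the local pool

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Define the pool and populate it (these two commands run on the MGS).
    run(["lctl", "pool_new", POOL])
    run(["lctl", "pool_add", POOL, LOCAL_OSTS])

    # On a client, direct new files under this directory to the local pool.
    run(["lfs", "setstripe", "--pool", "local", LOCAL_DIR])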
This paper reports the design and implementation of a secure, wide area network, distributed filesystem by the ExTENCI project, based on the Lustre filesystem. The system is used for remote access to analysis data from the CMS experiment at the Large Hadron Collider, and from the Lattice Quantum ChromoDynamics (LQCD) project. Security is provided by Kerberos authentication and authorization, with additional fine-grained control based on Lustre ACLs (Access Control Lists) and quotas. We investigate the impact of various Kerberos security flavors on the I/O rates of CMS applications on client nodes reading and writing data to the Lustre filesystem, and on LQCD benchmarks. The clients can be real or virtual nodes. We are investigating additional options for user authentication based on user certificates. We compare the Lustre performance to that obtained with other distributed storage technologies.
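The sketch below suggests one way such a comparison of Kerberos security flavors could be driven: switch the Lustre RPC security flavor, then time a streaming read. The filesystem name, mount point, and test file are illustrative assumptions; the flavor names (null, krb5a, krb5i, krb5p) are the standard Lustre RPC security flavors, and this is not the paper's actual benchmarking harness.

    #!/usr/bin/env python3
    """Hypothetical sketch: time a streaming read under each Lustre security flavor.
    Filesystem name, mount point, and test file are assumptions for illustration."""
    import subprocess
    import time

    FSNAME = "extenci"                            # assumed filesystem name
    FLAVORS = ["null", "krb5a", "krb5i", "krb5p"]
    TESTFILE = "/mnt/extenci/bench/testfile"      # assumed pre-staged test file

    for flavor in FLAVORS:
        # Apply the flavor to all client-server RPC traffic (run on the MGS;
        # assumed co-located here for simplicity).
        subprocess.run(["lctl", "conf_param",
                        f"{FSNAME}.srpc.flavor.default={flavor}"], check=True)
        time.sleep(10)  # give clients time to renegotiate security contexts

        # Time a streaming read as a crude stand-in for the CMS/LQCD workload.
        start = time.time()
        subprocess.run(["dd", f"if={TESTFILE}", "of=/dev/null", "bs=1M"],
                       check=True)
        print(f"{flavor}: {time.time() - start:.1f} s")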
Summary: High-speed networks and Grid computing have been actively investigated, and their capabilities are being demonstrated. However, their application to high-end scientific computing and modeling is still to be explored. In this paper we discuss the related issues and present our prototype work on applying the XCAT3 framework technology to geomagnetic data assimilation development with distributed computers, connected through networks of up to 10 Gigabit Ethernet.
Abstract. We have developed remote data access for large volumes of data over the Wide Area Network based on the Lustre filesystem and Kerberos authentication for security. In this paper we explore a prototype for two-step data access from worker nodes at Florida Tier3 centers, located behind a firewall and using a private network, to data hosted on the Lustre filesystem at the University of Florida CMS Tier2 center. At the Tier3 center we use a client which securely mounts the Lustre filesystem and hosts an XrootD server. The worker nodes access the data from the Tier3 client using POSIX compliant tools via the XrootD-fs filesystem. We perform scalability tests with up to 200 jobs running in parallel on the Tier3 worker nodes.

Introduction

The LHC computing community is exploring alternatives to the traditional strategy of deploying storage and processing resources at the same facility. In this model inefficiencies arise when jobs wait for computing resources to free up at sites with particularly popular data sets, or when the data sets are duplicated across many sites that all want to access these interesting data. In addition, facilities with modest computing resources would need to deploy sizeable disk or tape storage systems that greatly add to the complexity of the facility, making it more difficult to maintain and operate. This is especially problematic for small Tier3 centers with limited access to suitably trained manpower. Recently, efforts within the CMS [1] and ATLAS [2] experiments have been underway to explore federated data stores which feature remote data access, currently through XrootD services running on the grid. A computer can access data by simply specifying a globally defined logical filename, which the infrastructure translates, through a series of XrootD services, to locate the actual files, which are then served to the computer from anywhere at any time. So far, these efforts focus on using these federated data stores as a failover mechanism to access data already stored locally.

In this paper we measure the I/O performance of typical HEP workflows accessing data remotely via the Lustre cluster filesystem (FS) [3] distributed on a Wide Area Network (WAN). We utilized a portion of the ExTENCI [4] testbed between the CMS Tier2 center located in Gainesville, at the University of Florida (UF), and the Tier3 center at Florida International University (FIU) located in Miami. The testbed features clients that are separated by 560 km (14 ms Round Trip Time, "RTT") from the storage devices. The storage is mounted over a 10 Gbps network with Kerberos [5] authentication between clients and server. We also report on the performance impact of using XrootD services to bridge the remote Lustre FS mounted to Worker Nodes (WN) when the WNs are connected only via a private VLAN.
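To make the two-step access pattern concrete, here is a minimal sketch of what a worker node on the private VLAN would do: the Tier3 gateway mounts the Kerberized Lustre filesystem over the WAN and exports it through an XrootD server, while the worker node sees it as a POSIX filesystem via xrootdfs and reads it with ordinary file I/O. The mount point and file name are illustrative assumptions.

    #!/usr/bin/env python3
    """Hypothetical sketch: POSIX read through an xrootdfs mount on a worker node.
    The mount point and file name are assumptions for illustration."""
    import os

    XROOTDFS_MOUNT = "/xrootdfs/store/user"   # assumed worker-node mount point
    SAMPLE_FILE = os.path.join(XROOTDFS_MOUNT, "sample_dataset.root")

    # Plain POSIX reads: no XrootD client library is needed on the worker node,
    # so unmodified analysis jobs can open the file like any local one.
    total = 0
    with open(SAMPLE_FILE, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            total += len(chunk)
    print(f"Read {total / 1e6:.1f} MB through the xrootdfs mount")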