Results are presented from searches for the standard model Higgs boson in proton-proton collisions at √s = 7 and 8 TeV in the Compact Muon Solenoid experiment at the LHC, using data samples corresponding to integrated luminosities of up to 5.1 fb⁻¹ at 7 TeV and 5.3 fb⁻¹ at 8 TeV. The search is performed in five decay modes: γγ, ZZ, W⁺W⁻, τ⁺τ⁻, and bb̄. An excess of events is observed above the expected background, with a local significance of 5.0 standard deviations, at a mass near 125 GeV, signalling the production of a new particle. The expected significance for a standard model Higgs boson of that mass is 5.8 standard deviations. The excess is most significant in the two decay modes with the best mass resolution, γγ and ZZ; a fit to these signals gives a mass of 125.3 ± 0.4 (stat.) ± 0.5 (syst.) GeV. The decay to two photons indicates that the new particle is a boson with spin different from one.
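The mass above is quoted with separate statistical and systematic uncertainties. When a single total uncertainty is wanted, a common convention (assumed here, not stated in the abstract) is to treat the two as independent and add them in quadrature; the short Python sketch below applies that to the quoted numbers.

import math

# Quoted mass measurement and its uncertainties, in GeV
mass = 125.3
stat = 0.4   # statistical uncertainty
syst = 0.5   # systematic uncertainty

# Assumption: stat. and syst. uncertainties are independent, so they add in quadrature.
total = math.sqrt(stat**2 + syst**2)
print(f"m = {mass:.1f} +/- {total:.2f} GeV (stat. and syst. combined)")
# prints: m = 125.3 +/- 0.64 GeV (stat. and syst. combined)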
The Higgs boson was postulated nearly five decades ago within the framework of the standard model of particle physics and has been the subject of numerous searches at accelerators around the world. Its discovery would verify the existence of a complex scalar field thought to give mass to three of the carriers of the electroweak force (the W⁺, W⁻, and Z⁰ bosons) as well as to the fundamental quarks and leptons. The CMS Collaboration has observed, with a statistical significance of five standard deviations, a new particle produced in proton-proton collisions at the Large Hadron Collider at CERN. The evidence is strongest in the diphoton and four-lepton (electrons and/or muons) final states, which provide the best mass resolution in the CMS detector. The probability of the observed signal being due to a random fluctuation of the background is about 1 in 3 × 10⁶. The new particle is a boson with spin not equal to 1 and has a mass of about 125 giga–electron volts. Although its measured properties are, within the uncertainties of the present data, consistent with those expected of the Higgs boson, more data are needed to elucidate the precise nature of the new particle.
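The "1 in 3 × 10⁶" figure is the one-sided Gaussian tail probability corresponding to roughly five standard deviations. The sketch below reproduces that conversion; the use of SciPy here is an assumption of convenience, not something taken from the paper.

from scipy.stats import norm

z = 5.0                            # local significance in standard deviations
p = norm.sf(z)                     # one-sided tail probability, p = 1 - Phi(z)
print(f"p-value = {p:.2e}")        # p-value = 2.87e-07
print(f"about 1 in {1 / p:,.0f}")  # about 1 in 3,488,556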
Abstract

Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. In addition, we discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

Overview

The use of highly distributed systems for high-throughput computing has been very successful for the broad scientific computing community. Programs such as the Open Science Grid [1] allow scientists to gain efficiency by utilizing available cycles across different domains. Traditionally, these programs have aggregated resources owned by different institutes, adding the important ability to elastically contract and expand the resource pool to match instantaneous demand. An appealing scenario is to extend the reach of these elastic resources to the rental market of commercial clouds.

A prototypical example of such a scientific domain is High Energy Physics (HEP), which is strongly dependent on high-throughput computing. Every stage of a modern HEP experiment requires massive resources (compute, storage, networking). Detector- and simulation-generated data have to be processed and associated with auxiliary detector and beam information to generate physics objects, which are then stored and made available to the experimenters for analysis. In the current computing paradigm, the facilities that provide the necessary resources use distributed high-throughput computing, with global workflow, scheduling, and data management, enabled by high-performance networks. The computing resources in these facilities are either owned by an experiment and operated by laboratories and university partners (e.g. Energy Frontier experiments at the Large Hadron Collider (LHC), such as CMS and ATLAS) or deployed for a specific program, owned and operated by the host laboratory (e.g. Intensity Frontier experiments at Fermilab, such as NOvA and MicroBooNE). The HEP investment to deploy and operate these resources is significant: for example, at the time of this work, …
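To make the "elastically contract and expand" idea concrete, the toy Python sketch below shows one way a facility could decide, each scheduling cycle, how many cloud slots to rent on top of its owned capacity. The function, thresholds, and prices are illustrative assumptions, not the actual HEPCloud decision engine described in the paper.

# Toy model of elastic provisioning: rent cloud slots only for the demand
# that locally owned resources cannot absorb, subject to an hourly budget.
# All names and numbers are illustrative assumptions.

def slots_to_rent(pending_jobs: int, local_free_slots: int,
                  cost_per_slot_hour: float, budget_per_hour: float) -> int:
    """Return how many cloud slots to request in this scheduling cycle."""
    overflow = max(0, pending_jobs - local_free_slots)        # demand local resources cannot cover
    affordable = int(budget_per_hour // cost_per_slot_hour)   # cap imposed by the hourly budget
    return min(overflow, affordable)

# Example: 12,000 queued jobs, 4,000 free local slots,
# a spot-market price of $0.03 per slot-hour, and a $300/hour budget.
print(slots_to_rent(12_000, 4_000, 0.03, 300.0))  # prints 8000 (the budget cap is not binding here)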