Results are presented from searches for the standard model Higgs boson in proton-proton collisions at √s = 7 and 8 TeV with the Compact Muon Solenoid (CMS) experiment at the LHC, using data samples corresponding to integrated luminosities of up to 5.1 fb⁻¹ at 7 TeV and 5.3 fb⁻¹ at 8 TeV. The search is performed in five decay modes: γγ, ZZ, W⁺W⁻, τ⁺τ⁻, and bb̄. An excess of events is observed above the expected background, with a local significance of 5.0 standard deviations, at a mass near 125 GeV, signalling the production of a new particle. The expected significance for a standard model Higgs boson of that mass is 5.8 standard deviations. The excess is most significant in the two decay modes with the best mass resolution, γγ and ZZ; a fit to these signals gives a mass of 125.3 ± 0.4 (stat.) ± 0.5 (syst.) GeV. The decay to two photons indicates that the new particle is a boson with spin different from one.
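As a quick numerical illustration of the quoted figures (not part of the original analysis), the one-sided Gaussian tail probability corresponding to a 5.0 standard-deviation local significance, and the statistical and systematic mass uncertainties combined in quadrature, can be sketched with the standard library:

```python
# Illustrative sketch only: reproduces the standard statistics behind the
# quoted numbers, not the experiment's actual fitting procedure.
from math import erf, sqrt

def p_value_from_significance(z):
    """One-sided Gaussian tail probability for a significance of z sigma."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

# A 5.0 sigma local significance corresponds to p of roughly 3e-7.
p_local = p_value_from_significance(5.0)

# Combining the quoted statistical and systematic mass uncertainties
# in quadrature gives the total uncertainty on the 125.3 GeV estimate.
total_unc = sqrt(0.4**2 + 0.5**2)  # about 0.64 GeV
```

This is the usual convention in particle physics: "5 sigma" refers to the one-sided tail of a standard normal distribution.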
Recent results of the searches for Supersymmetry in final states with one or two leptons at CMS are presented. Many Supersymmetry scenarios, including the Constrained Minimal Supersymmetric extension of the Standard Model (CMSSM), predict a substantial number of events containing leptons, while the largest fraction of Standard Model background events, which arise from QCD interactions, is strongly reduced by requiring isolated leptons. The analyzed data were taken in 2011 and correspond to an integrated luminosity of approximately L = 1 fb⁻¹. The center-of-mass energy of the pp collisions was √s = 7 TeV.
In Grids, scheduling decisions are often made on the basis of jobs being either data or computation intensive: in data-intensive situations jobs may be pushed to the data, and in computation-intensive situations data may be pulled to the jobs. This kind of scheduling, in which there is no consideration of network characteristics, can lead to performance degradation in a Grid environment and may result in large processing queues and job execution delays due to site overloads. In this paper we describe a Data Intensive and Network Aware (DIANA) meta-scheduling approach, which takes into account data, processing power and network characteristics when making scheduling decisions across multiple sites. Through a practical implementation on a Grid testbed, we demonstrate that queue and execution times of data-intensive jobs can be significantly improved when we introduce our proposed DIANA scheduler. The basic scheduling decisions are dictated by a weighting factor for each potential target location, which is a calculated function of network characteristics, processing cycles, and data location and size. The job scheduler provides a global ranking of the computing resources and then selects an optimal one on the basis of this overall access and execution cost. The DIANA approach considers the Grid as a combination of active network elements and takes network characteristics as a first-class criterion in the scheduling decision matrix, along with computations and data. The scheduler can then make informed decisions by taking into account the changing state of the network, the locality and size of the data, and the pool of available processing cycles.

Keywords: meta scheduling · network awareness · peer-to-peer architectures · data intensive · scheduling algorithm

Background

Resource management [1, 2] is a central task in any Grid system. Resources may include "traditional" resources such as compute cycles and network bandwidth.

J Grid Computing (2007) 5:43-64
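The weighted ranking described above can be illustrated with a minimal sketch. The cost terms, weights, and per-site figures below are hypothetical assumptions chosen for illustration, not the DIANA scheduler's actual formula or values:

```python
# Hypothetical sketch of a DIANA-style cost ranking. The weights and the
# per-site cost figures are illustrative assumptions, not values from the paper.

def total_cost(network, compute, data_access, w_net=1.0, w_cpu=1.0, w_data=1.0):
    """Weighted access-and-execution cost for one candidate site (lower is better)."""
    return w_net * network + w_cpu * compute + w_data * data_access

# Candidate sites with (network cost, compute cost, data access cost).
sites = {
    "site_a": (2.0, 5.0, 1.0),
    "site_b": (4.0, 2.0, 3.0),
    "site_c": (1.0, 6.0, 2.0),
}

# Global ranking of the computing resources; the job is scheduled
# to the site with the lowest overall cost.
ranking = sorted(sites, key=lambda s: total_cost(*sites[s]))
best_site = ranking[0]
```

In the real scheduler each term would be derived from measured quantities (e.g. bandwidth and latency for the network term, queue length and CPU availability for the compute term, replica location and size for the data term), and the weights would reflect whether a job is data- or computation-intensive.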
Evolution of brain imaging in neurodegenerative diseases

Brain imaging was regarded as an elective examination in patients with cognitive decline 15 years ago [1]. The practice parameters for diagnosis and evaluation of dementia defined by the American Academy of Neurology regarded computed tomography (CT) and magnetic resonance (MR) as 'optional' assessments [2,3]. Over time, imaging in dementia has moved from a negative, exclusionary role to one that adds positive diagnostic and prognostic information. In the late 1990s, the traditional exclusionary approach was abandoned in favor of the inclusive approach [4,5]. Rapid advances in neuroimaging technologies such as PET, single photon emission CT, MR spectroscopy, diffusion tensor imaging and functional MRI have offered new insight into the pathophysiology of Alzheimer's disease (AD) [6] and, consequently, increasingly powerful new data-analysis methods have been developed [7]. Since the beginning of the 21st century, innovative techniques for region-of-interest-based volumetry, automated voxel-based morphometry, cortical thickness measurement, basal forebrain volumetry and multivariate statistics have emerged [7-9], and the most feasible and accurate of these measurements have started to be used in clinical settings. The availability to the neuroimaging community of large prospective image data repositories has led to the development of web-based interfaces to access data and online image analysis tools to assess longitudinal brain changes [10-13]. With the development of novel analysis techniques, the computational complexity of neuroimaging analysis has also increased significantly. Higher spatial resolution images and longer time scans are being acquired, so that more voxels need to be processed for each acquisition.
The same applies to the computational resources required by algorithms, since these have become increasingly processor intensive.

Neuroscience is increasingly making use of statistical and mathematical tools to extract information from images of biological tissues. Computational neuroimaging tools require substantial computational resources, and the increasing availability of large image datasets will further enhance this need. Many efforts have been directed towards creating brain image repositories, including the recent US Alzheimer's Disease Neuroimaging Initiative. Multi-site distributed computing infrastructures have been launched with the goal of fostering shared resources and facilitating data analysis in the study of neurodegenerative diseases. Currently, some Grid- and non-Grid-based projects aim to establish distributed e-infrastructures, interconnecting compatible imaging datasets and supplying neuroscientists with the most advanced information and communication technology tools to study markers of Alzheimer's and other brain diseases, but they have so far failed to make a difference in the larger neuroscience community. NeuGRID is a European Commission-funded effort arising from the needs of the Alzheimer's...