Aims: This study aimed to identify the symptoms associated with early-stage SARS-CoV-2 (COVID-19) infections in healthcare professionals (HCP) using both clinical and laboratory data. Methods: A total of 1,297 patients, admitted between March 18 and April 8, 2020, were stratified according to their risk of developing COVID-19 using their responses to a questionnaire designed to evaluate symptoms and risk conditions. Results: Anosmia/hyposmia (p < 0.0001), fever (p < 0.0001), body pain (p < 0.0001), and chills (p = 0.001) were all independent predictors for COVID-19, with a 72% estimated probability of detecting COVID-19 in nasopharyngeal swab samples. Leukopenia, relative monocytosis, decreased eosinophil values, CRP, and platelets were also shown to be significant independent predictors for COVID-19. Conclusions: The significant clinical features for COVID-19 were identified as anosmia, fever, chills, and body pain. Elevated CRP, leukocytes under 5,400 × 10⁹/L, and relative monocytosis (>9%) were common among patients with a confirmed COVID-19 diagnosis. In the absence of RT-PCR tests, these variables may help to identify possible COVID-19 infections during pandemic outbreaks.
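The abstract above reports a set of independent symptom predictors combined into an estimated probability of infection, which suggests a multivariate logistic model. The sketch below is a minimal, hypothetical illustration of how such a symptom-based risk score works; the intercept and weights are invented for demonstration and are NOT the study's fitted coefficients.

```python
# Hypothetical sketch of a symptom-based logistic risk score, in the spirit
# of the predictors named in the abstract (anosmia/hyposmia, fever, body
# pain, chills). All coefficients below are illustrative assumptions.
import math

def covid_risk(anosmia, fever, body_pain, chills,
               intercept=-2.0, weights=(1.8, 1.2, 0.9, 0.7)):
    """Return sigmoid(intercept + sum(w_i * x_i)) for binary symptom flags."""
    symptoms = (anosmia, fever, body_pain, chills)
    z = intercept + sum(w * x for w, x in zip(weights, symptoms))
    return 1.0 / (1.0 + math.exp(-z))

# A patient reporting all four symptoms scores far higher than one with none.
p_all = covid_risk(1, 1, 1, 1)
p_none = covid_risk(0, 0, 0, 0)
```

In a real analysis the weights would be estimated from the cohort data (e.g. by maximum-likelihood logistic regression) rather than chosen by hand.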
Distributed Analysis in CMS. Abstract: CMS expects to manage several petabytes of data each year, distributing them over many computing sites around the world and enabling data access at those centers for analysis. CMS has identified the distributed sites as the primary location for physics analysis, supporting a wide community of potentially as many as 3,000 users. This represents an unprecedented scale of distributed computing resources and number of users. An overview of the computing architecture, the software tools, and the deployed distributed infrastructure is reported. Summaries of the experience in establishing efficient and scalable operations to prepare for CMS distributed analysis are presented, followed by the user experience in current analysis activities.
The CMS CERN Analysis Facility (CAF) was primarily designed to host a large variety of latency-critical workflows. These break down into alignment and calibration, detector commissioning and diagnosis, and high-interest physics analysis requiring fast turnaround. In addition to the low-latency requirement on the batch farm, another mandatory condition is efficient access to the RAW detector data stored at the CERN Tier-0 facility. The CMS CAF also foresees resources for interactive login by a large number of CMS collaborators located at CERN, as an entry point for their day-to-day analysis. These resources will run on a separate partition in order to protect the high-priority use cases described above. While the CMS CAF represents only a modest fraction of the overall CMS resources on the WLCG Grid, an appropriately sized user-support service needs to be provided. We describe the building, commissioning, and operation of the CMS CAF during the year 2008. The facility was heavily and routinely used by almost 250 users during multiple commissioning and data-challenge periods. It reached a CPU capacity of 1.4 MSI2K and a disk capacity at the petabyte scale. In particular, we focus on performance in terms of networking, disk access, and job efficiency, and extrapolate prospects for the upcoming first year of LHC data taking. We also present the experience gained and the limitations observed in operating such a large facility, in which well-controlled workflows are combined with more chaotic analysis activity by a large number of physicists.
Abstract. Monte Carlo production in CMS has received a major boost in performance and scale since the past CHEP06 conference. The production system has been re-engineered in order to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system, and data processing framework. The system is interfaced to the two major computing Grids used by CMS, the LHC Computing Grid (LCG) and the Open Science Grid (OSG). Operational experience and integration aspects of the new CMS Monte Carlo production system are presented, together with an analysis of production statistics. The new system automatically handles job submission, resource monitoring, job queuing, job distribution according to the available resources, data merging, and registration of data into the data bookkeeping, data location, data transfer, and placement systems. Compared to the previous production system, automation, reliability, and performance have been considerably improved. A more efficient use of computing resources and better handling of the inherent Grid unreliability have resulted in an increase of production scale by about an order of magnitude, capable of running on the order of ten thousand jobs in parallel and yielding more than two million events per day.