Asteroseismology with the Kepler space telescope is providing not only an improved characterization of exoplanets and their host stars, but also a new window on stellar structure and evolution for the large sample of solar-type stars in the field. We perform a uniform analysis of 22 of the brightest asteroseismic targets with the highest signal-to-noise ratio observed for 1 month each during the first year of the mission, and we quantify the precision and relative accuracy of asteroseismic determinations of the stellar radius, mass, and age that are possible using various methods. We present the properties of each star in the sample derived from an automated analysis of the individual oscillation frequencies and other observational constraints using the Asteroseismic Modeling Portal (AMP), and we compare them to the results of model-grid-based methods that fit the global oscillation properties. We find that fitting the individual frequencies typically yields asteroseismic radii and masses to ∼1% precision, and ages to ∼2.5% precision (respectively 2, 5, and 8 times better than fitting the global oscillation properties). The absolute level of agreement between the results from different approaches is also encouraging, with model-grid-based methods yielding slightly smaller estimates of the radius and mass and slightly older values for the stellar age relative to AMP, which computes a large number of dedicated models for each star. The sample of targets for which this type of analysis is possible will grow as longer data sets are obtained during the remainder of the mission.
Note: Kepler data are collected in quarters lasting three months each, except for the first quarter (referred to as Q1), which lasted one month. Individual months within later quarters are denoted by, e.g., Q2.1 for the first month of the second quarter.
This paper examines the economics of cloud computing charging from the perspective of a supercomputing resource provider offering its own resources. To evaluate the competitiveness of our computing center with cloud computing resources, we develop a comprehensive system utilization charging model similar to that used by Amazon EC2 and apply the model to our current resources and planned procurements. For our current resource, we find that charging for computational time may be appropriate, but that charging for data traffic between the supercomputer and the storage/front-end systems would result in negligible additional revenue. Similarly, charging for data storage capacity at currently typical commercial rates yields insufficient revenue to offset the acquisition and operation of the storage. However, when we extend the analysis to a capacity cluster scheduled for deployment in the first half of 2010 that will be made available to users through batch, Grid, and cloud interfaces, we find that the resource will be competitive with current and anticipated cloud rates.
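The abstract above weighs three separate revenue streams (compute time, data traffic, and storage capacity) against each other. A minimal sketch of such a utilization charging model is shown below; all rates and usage figures are hypothetical placeholders for illustration, not the paper's actual numbers.

```python
# Illustrative utilization charging model: revenue is computed separately
# for compute time, data traffic, and storage, so each stream's contribution
# can be compared. Rates below are assumed placeholders, not real EC2 prices.

def annual_revenue(cpu_hours, gb_transferred, gb_stored_months,
                   rate_cpu=0.10, rate_transfer=0.10, rate_storage=0.15):
    """Return per-stream and total revenue under simple linear pricing."""
    compute = cpu_hours * rate_cpu            # $ per CPU-hour delivered
    traffic = gb_transferred * rate_transfer  # $ per GB moved to/from storage
    storage = gb_stored_months * rate_storage # $ per GB-month retained
    return {"compute": compute, "traffic": traffic, "storage": storage,
            "total": compute + traffic + storage}

# Hypothetical usage profile where compute dominates, echoing the abstract's
# finding that traffic and storage charges add comparatively little revenue.
rev = annual_revenue(cpu_hours=50_000_000, gb_transferred=200_000,
                     gb_stored_months=1_000_000)
```

With this profile, compute accounts for well over 95% of total revenue, which is the shape of result the abstract reports for the current resource.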
The Asteroseismic Modeling Portal (AMP) provides a web-based interface for astronomers to run and view simulations that derive the properties of Sun-like stars from observations of their pulsation frequencies. In this paper, we describe the architecture and implementation of AMP, highlighting the lightweight design principles and tools used to produce a functional fully-custom web-based science application in less than a year. Targeted as a TeraGrid science gateway, AMP's architecture and implementation are intended to simplify its orchestration of TeraGrid computational resources. AMP's web-based interface was developed as a traditional standalone database-backed web application using the Python-based Django web development framework, allowing us to leverage the Django framework's capabilities while cleanly separating the user interface development from the grid interface development. We have found this combination of tools flexible and effective for rapid gateway development and deployment.
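The separation the abstract describes, where a database-backed web application is decoupled from the grid interface, can be sketched as two layers that communicate only through shared job records. This is an illustrative sketch, not AMP's actual code; the class names, job states, and polling scheme are all assumptions.

```python
# Hypothetical sketch of web/grid separation via a shared job table:
# the web layer only creates job records, and an independent grid-interface
# process polls for pending records and submits them to the compute resource.
from dataclasses import dataclass

@dataclass
class JobRecord:
    star_name: str
    status: str = "PENDING"   # PENDING -> SUBMITTED (states are assumed)

class JobTable:
    """Stands in for the database table linking the two layers."""
    def __init__(self):
        self.rows = []

    def create(self, star_name):
        # Called from the web layer (e.g., a Django view handling a form POST).
        job = JobRecord(star_name)
        self.rows.append(job)
        return job

    def pending(self):
        # Polled by the grid-interface daemon, independently of the web app.
        return [j for j in self.rows if j.status == "PENDING"]

def grid_daemon_pass(table, submit):
    """One polling pass: hand each pending job to the grid submitter."""
    for job in table.pending():
        submit(job)
        job.status = "SUBMITTED"

table = JobTable()
table.create("16 Cyg A")  # star name is illustrative
grid_daemon_pass(table, submit=lambda job: None)  # a real submitter would call the grid middleware
```

Because the two layers share only the job table, the user-interface code and the grid-submission code can be developed and deployed independently, which is the design benefit the abstract highlights.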
While much high-performance computing is performed using massively parallel MPI applications, many workflows execute jobs with a mix of processor counts. At the extreme end of the scale, some workloads consist of large quantities of single-processor jobs. These types of workflows lead to inefficient usage of massively parallel architectures such as the IBM Blue Gene/L (BG/L) because of allocation constraints imposed by its unique system design. Recently, IBM introduced the ability to schedule individual processors on BG/L, a feature named High Throughput Computing (HTC), creating an opportunity to exploit the system's power efficiency for other classes of computing. In this paper, we present a Grid-enabled interface supporting HTC on BG/L. This interface accepts single-processor tasks using Globus GRAM, aggregates HTC tasks into BG/L partitions, and requests partition execution using the underlying system scheduler. By separating HTC task aggregation from scheduling, we provide the ability for workflows constructed using standard Grid middleware to run both parallel and serial jobs on the BG/L. We examine the startup latency and performance of running large quantities of HTC jobs. Finally, we deploy Daymet, a component of a coupled climate model, on a BG/L system using our HTC interface.
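The aggregation step the abstract describes, batching independent single-processor tasks into fixed-size partitions before handing each partition to the system scheduler, can be sketched as follows. The partition size and task representation are illustrative assumptions, not the BG/L API.

```python
# Hypothetical sketch of HTC task aggregation: independent single-processor
# tasks are grouped into partitions of a fixed maximum size, and each
# partition would then be submitted to the underlying scheduler as one unit.

def aggregate_tasks(tasks, partition_size):
    """Group single-processor tasks into partitions of at most partition_size."""
    partitions = []
    for i in range(0, len(tasks), partition_size):
        partitions.append(tasks[i:i + partition_size])
    return partitions

# Ten single-processor tasks batched into partitions of at most four tasks;
# the final partition is smaller because 10 is not a multiple of 4.
tasks = [f"task-{n}" for n in range(10)]
parts = aggregate_tasks(tasks, partition_size=4)
```

Keeping aggregation separate from scheduling, as here, is what lets the same scheduler-facing code handle both parallel jobs (one task per partition) and batched serial jobs.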