This work aims to present our current best physical understanding of common-envelope evolution (CEE). We highlight areas of consensus and disagreement, and stress ideas which should point the way forward for progress in this important but long-standing and largely unconquered problem. Unusually for CEE-related work, we mostly try to avoid relying on results from population synthesis or observations, in order to avoid potentially being misled by previous misunderstandings. As far as possible we debate all the relevant issues starting from physics alone, all the way from the evolution of the binary system immediately before CEE begins to the processes which might occur just after the ejection of the envelope. In particular, we include extensive discussion of the energy sources and sinks operating in CEE, and hence examine the foundations of the standard energy formalism. Special attention is also given to comparing the results of hydrodynamic simulations from different groups and to discussing the potential effect of initial conditions on the differences in the outcomes. We compare current numerical techniques for the problem of CEE and also discuss whether more appropriate tools could and should be produced, including new formulations of computational hydrodynamics and attempts to include 3D processes within 1D codes. Finally, we explore new ways to link CEE with observations. We compare previous simulations of CEE to the recent outburst from V1309 Sco, and discuss to what extent post-common-envelope binaries and nebulae can provide information, e.g. from binary eccentricities, which is not currently being fully exploited.
In the cores of young dense star clusters, repeated stellar collisions involving the same object can occur, and it has been suggested that this leads to the formation of an intermediate-mass black hole. To test this scenario we compute the detailed evolution of the merger remnant for three collision sequences, follow the evolution until the onset of carbon burning, and estimate the final remnant mass to determine the ultimate fate of a runaway merger sequence. We use a detailed stellar evolution code to follow the evolution of the collision product. At each collision we mix the two colliding stars, accounting for the mass lost during the collision. During the stellar evolution we apply mass-loss rates from the literature, as appropriate for the evolutionary stage of the merger remnant. We compute models at high (Z = 0.02) and low (Z = 0.001) metallicity to quantify metallicity effects. We find that the merger remnant becomes a Wolf-Rayet star before the end of core hydrogen burning. Mass loss from stellar winds dominates over the mass increase due to repeated mergers for all three merger sequences that we consider. In none of our high-metallicity models is an intermediate-mass black hole formed; instead, our models reach a mass of 10-14 M⊙ at the onset of carbon burning. At low metallicity the final remnant is more massive and may explode as a pair-creation supernova. We find that our metal-rich models become inflated as a result of developing an extended low-density envelope. This may increase the probability of further collisions, but self-consistent N-body calculations with detailed evolution of runaway mergers are required to verify this.
This paper presents applications of a weighted meshless scheme for conservation laws to the Euler equations and the equations of ideal magnetohydrodynamics. The divergence constraint of the latter is maintained to truncation-error level by a new meshless divergence-cleaning procedure. The physics of the interaction between the particles is described by a one-dimensional Riemann problem in a moving frame; as a result, the diffusion required to treat dissipative processes is added automatically. Consequently, our scheme has no free parameters that control the physics of the inter-particle interaction, with the exception of the number of interacting neighbours, which controls the resolution and accuracy. The resulting equations have a form similar to the SPH equations, and therefore existing SPH codes can be used to implement the weighted particle scheme. The scheme is validated in several hydrodynamic and MHD test cases. In particular, we demonstrate for the first time the ability of a meshless MHD scheme to model the magneto-rotational instability in accretion disks.
We present Sapporo, a library for performing high-precision gravitational N-body simulations on NVIDIA Graphics Processing Units (GPUs). Our library mimics the GRAPE-6 library, and N-body codes currently running on GRAPE-6 can switch to Sapporo by simply relinking the library. The precision of our library is comparable to that of GRAPE-6, even though internally the GPU hardware is limited to single-precision arithmetic. This limitation is effectively overcome by emulating double precision when calculating the distance between particles. The performance loss of this operation is small (≲ 20%) compared to the advantage of being able to run at high precision. We tested the library using several GRAPE-6-enabled N-body codes, in particular Starlab and phiGRAPE. We measured a peak performance of 800 Gflop/s when running with 10^6 particles on a PC with four commercial G92-architecture GPUs (two GeForce 9800GX2 cards). As a production test, we simulated a 32k Plummer model with equal-mass stars well beyond core collapse. The simulation took 41 days, during which the mean performance was 113 Gflop/s. The GPU did not show any problems from running in a production environment for such an extended period of time.

Introduction

Graphical processing units (GPUs) are quickly becoming mainstream in computational science. The introduction of the Compute Unified Device Architecture (CUDA; Fernando, 2004), in which GPUs can be programmed effectively, has generated a paradigm shift in scientific computing (Hoekstra et al., 2007). Modern GPUs are greener in terms of CO2 production, have a smaller footprint, are cheaper, and are as easy to program as traditional parallel computers. In addition, there is no waiting queue when running large simulations on a local GPU-equipped workstation. Newtonian stellar dynamics has traditionally been at the forefront of high-performance computing.
The first dedicated Newtonian solver (Applegate et al., 1986) was used to study the stability of the solar system (Sussman and Wisdom, 1992). Even faster specialized hardware soon followed with the inauguration of the GRAPE family of computers, which have an impressive history of breaking computing-speed records (Makino and Taiji, 1998). Nowadays, GPUs are used in various scientific areas, such as molecular dynamics (Anderson et al., 2008; van Meel et al., 2008), solving Kepler's equations (Ford, 2009), and Newtonian N-body simulations. Solving the Newtonian N-body problem with GPUs started in the early 2000s with a shared-time-step algorithm and a second-order integrator (Nyland et al., 2004). A few years later this algorithm was improved to include individual time steps and a higher-order integrator, in a code written in the device-specific language Cg (Fernando and Kilgard, 2003). The performance was still relatively low compared to later implementations in CUDA via the Cunbody package (Hamada and Iitaka, 2007), the Kirin library, and the Yebisu N-body code (Nitadori and Makino, 2008; Nitadori, 2009). The main p...