Breeding blankets are designed to ensure tritium self-sufficiency in deuterium–tritium fusion power plants. They also play a vital role in shielding key components of the reactor and provide the main source of heat, which is ultimately used to generate electricity. Blanket design is therefore critical to the success of fusion reactors and an integral part of the overall design process. Neutronic simulations of breeder blankets are regularly performed to assess the performance of a particular design, and an iterative process of design improvements and parametric studies is required to optimize the design and meet performance targets. Within the EU DEMO program the breeding blanket design cycle is repeated for each new baseline design. One of the key steps is the creation of three-dimensional models intended primarily for neutronics but also usable in other computer-aided design (CAD)-based physics and engineering analyses. This article presents a novel blanket design tool which automates the production of heterogeneous 3D CAD-based geometries for the helium-cooled pebble bed, water-cooled lithium lead, helium-cooled lithium lead and dual-coolant lithium lead blanket types. The paper shows a method of integrating neutronics, thermal analysis and mechanical analysis with parametric CAD to facilitate the design process; the design tool provides parametric geometry for use in both neutronics and engineering simulations. This paper explains the methodology of the design tool and demonstrates its use by generating all four EU blanket designs from the EU DEMO baseline. Neutronics and heat transfer simulations using the resulting models have been carried out. The approach described has the potential to considerably speed up the design cycle and greatly facilitate the integration of multiphysics studies.
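As a rough illustration of what a parametric blanket description might look like before it is turned into CAD geometry, the following minimal Python sketch captures a layered radial build for one blanket concept. This is not the design tool's actual API; the class names, layer names, materials and thicknesses are invented for illustration only.

# Hypothetical sketch: a parametric radial build for a blanket segment.
# All names and numbers are placeholders, not the published tool's interface.
from dataclasses import dataclass, field

@dataclass
class Layer:
    name: str
    material: str
    thickness_cm: float  # radial thickness of this layer

@dataclass
class BlanketSegment:
    concept: str                              # e.g. "HCPB", "WCLL", "HCLL", "DCLL"
    layers: list = field(default_factory=list)

    def radial_build(self):
        """Return cumulative radial depth (cm) at each layer interface."""
        positions, depth = [], 0.0
        for layer in self.layers:
            depth += layer.thickness_cm
            positions.append((layer.name, depth))
        return positions

# Example: a coarse HCPB-like build with placeholder thicknesses.
hcpb = BlanketSegment(
    concept="HCPB",
    layers=[
        Layer("first wall", "tungsten/EUROFER", 2.5),
        Layer("breeder zone", "Li4SiO4 pebbles + Be", 45.0),
        Layer("back supporting structure", "EUROFER", 30.0),
    ],
)
for name, depth in hcpb.radial_build():
    print(f"{name}: interface at {depth:.1f} cm")

In a parametric workflow of the kind the abstract describes, varying one thickness in such a description and regenerating the geometry is what allows neutronics and engineering analyses to be repeated quickly for each design iteration.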
Modern epidemiological analyses to understand and combat the spread of disease depend critically on access to, and use of, data. Rapidly evolving data, such as data streams changing during a disease outbreak, are particularly challenging. Data management is further complicated by data being imprecisely identified when used. Public trust in policy decisions resulting from such analyses is easily damaged and is often low, with cynicism arising where claims of ‘following the science’ are made without accompanying evidence. Tracing the provenance of such decisions back through open software to primary data would clarify this evidence, enhancing the transparency of the decision-making process. Here, we demonstrate a Findable, Accessible, Interoperable and Reusable (FAIR) data pipeline. Although developed during the COVID-19 pandemic, it allows easy annotation of any data as they are consumed by analyses, or conversely traces the provenance of scientific outputs back through the analytical or modelling source code to primary data. Such a tool provides a mechanism for the public, and fellow scientists, to better assess scientific evidence by inspecting its provenance, while allowing scientists to support policymakers in openly justifying their decisions. We believe that such tools should be promoted for use across all areas of policy-facing research. This article is part of the theme issue ‘Technical challenges of modelling real-life epidemics and examples of overcoming these’.
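The core idea of annotating data as they are consumed can be sketched in a few lines. The following Python example is a generic, hypothetical provenance recorder written for illustration; it does not reproduce the FAIR data pipeline's actual API, and all function and field names are assumptions.

# Hypothetical sketch of provenance annotation at the point of data access.
import hashlib
import json
import pathlib
from datetime import datetime, timezone

class ProvenanceLog:
    """Record which inputs an analysis read and which outputs it produced."""

    def __init__(self, script_name: str):
        self.record = {
            "script": script_name,
            "started": datetime.now(timezone.utc).isoformat(),
            "inputs": [],
            "outputs": [],
        }

    def read(self, path: str) -> bytes:
        """Read a data file and log its identity (path plus content hash)."""
        data = pathlib.Path(path).read_bytes()
        self.record["inputs"].append(
            {"path": path, "sha256": hashlib.sha256(data).hexdigest()}
        )
        return data

    def write_output(self, path: str, data: bytes) -> None:
        """Write an analysis output and log it alongside the inputs used."""
        pathlib.Path(path).write_bytes(data)
        self.record["outputs"].append(
            {"path": path, "sha256": hashlib.sha256(data).hexdigest()}
        )

    def finalise(self, path: str = "provenance.json") -> None:
        """Persist the record so outputs can later be traced back to inputs."""
        pathlib.Path(path).write_text(json.dumps(self.record, indent=2))

Hashing the content rather than trusting file names is what makes the trace robust to imprecisely identified or rapidly changing data streams: the same hash later identifies exactly which version of a dataset fed a given result.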
The goal for CMS computing is to maximise the throughput of simulated event generation while also processing real data events as quickly and reliably as possible. To sustain this as the quantity of events increases, CMS computing has, since the beginning of 2011, migrated at the Tier 1 level from its old production framework, ProdAgent, to a new one, WMAgent. The WMAgent framework offers improved processing efficiency and better resource utilisation, as well as a reduction in the manpower required. In addition to the challenges encountered during the design of the WMAgent framework, several operational issues arose during its commissioning. The largest operational challenges concerned the usage and monitoring of resources, mainly as a result of a change in the way work is allocated: instead of work being assigned to operators, all work is centrally injected and managed in the Request Manager system, and the task of the operators has shifted from running individual workflows to monitoring the global workload. In this report we present how we tackled some of these operational challenges and how we benefitted from the lessons learned in the commissioning of the WMAgent framework at the Tier 2 level in late 2011. As case studies, we show how the WMAgent system performed during some of the large data reprocessing and Monte Carlo simulation campaigns.
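To make the shift from operator-assigned work to centrally injected requests concrete, the toy Python sketch below shows a priority-ordered global queue of work requests. It is illustrative only: the request fields, names and priorities are invented and do not reflect the actual Request Manager schema used by CMS.

# Illustrative only: a toy request descriptor and a priority-ordered global queue.
import heapq

def make_request(name, campaign, priority, n_events):
    """A minimal work request as it might be injected centrally (fields are invented)."""
    return {"name": name, "campaign": campaign, "priority": priority, "events": n_events}

class GlobalWorkQueue:
    """Operators monitor this shared queue rather than running individual workflows."""

    def __init__(self):
        self._heap = []

    def inject(self, request):
        # Higher-priority requests are pulled first (priority negated for a min-heap).
        heapq.heappush(self._heap, (-request["priority"], request["name"], request))

    def next_work(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

queue = GlobalWorkQueue()
queue.inject(make_request("Fall11_MC_ttbar", "Fall11", priority=80, n_events=10_000_000))
queue.inject(make_request("Run2011A_rereco", "ReReco", priority=95, n_events=250_000_000))
print(queue.next_work()["name"])  # the higher-priority reprocessing request is served first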
We report on an ongoing collaboration between epidemiological modellers and visualization researchers by documenting and reflecting upon knowledge constructs—a series of ideas, approaches and methods taken from existing visualization research and practice—deployed and developed to support modelling of the COVID-19 pandemic. Structured independent commentary on these efforts is synthesized through iterative reflection to develop: evidence of the effectiveness and value of visualization in this context; open problems upon which the research communities may focus; guidance for future activity of this type; and recommendations to safeguard the achievements and promote, advance, secure and prepare for future collaborations of this kind. In describing and comparing a series of related projects that were undertaken in unprecedented conditions, our hope is that this unique report, and its rich interactive supplementary materials, will guide the scientific community in embracing visualization in its observation, analysis and modelling of data as well as in disseminating findings. Equally, we hope to encourage the visualization community to engage with impactful science in addressing its emerging data challenges. If we are successful, this showcase of activity may stimulate mutually beneficial engagement between communities with complementary expertise to address problems of significance in epidemiology and beyond. See https://ramp-vis.github.io/RAMPVIS-PhilTransA-Supplement/. This article is part of the theme issue ‘Technical challenges of modelling real-life epidemics and examples of overcoming these’.