The original "Seven Motifs" set forth a roadmap of essential methods for the field of scientific computing, where a motif is an algorithmic method that captures a pattern of computation and data movement [1]. We present the Nine Motifs of Simulation Intelligence, a roadmap for the development and integration of the essential algorithms necessary for a merger of scientific computing, scientific simulation, and artificial intelligence. We call this merger simulation intelligence (SI), for short. We argue that the motifs of simulation intelligence are interconnected and interdependent, much like the components within the layers of an operating system. Using this metaphor, we explore the nature of each layer of the simulation intelligence "operating system" stack (SI-stack) and the motifs therein:

1. Multi-physics and multi-scale modeling
2. Surrogate modeling and emulation
3. Simulation-based inference
4. Causal modeling and inference
5. Agent-based modeling
6. Probabilistic programming
7. Differentiable programming
8. Open-ended optimization
9. Machine programming

We believe coordinated efforts between motifs offer immense opportunity to accelerate scientific discovery, from solving inverse problems in synthetic biology and climate science, to directing nuclear energy experiments and predicting emergent behavior in socioeconomic settings. We elaborate on each layer of the SI-stack, detailing the state-of-the-art methods, presenting examples to highlight challenges and opportunities, and advocating for specific ways to advance the motifs and the synergies from their combinations. Advancing and integrating these technologies can enable a robust and efficient hypothesis-simulation-analysis type of scientific method, which we introduce with several use cases for human-machine teaming and automated science.
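To make one of these motifs concrete, below is a minimal, hypothetical sketch of surrogate modeling and emulation: a small number of runs of an "expensive" simulator are used to fit a cheap statistical emulator, which is then queried densely (with uncertainty estimates) in place of the simulator. The toy simulator, design points, and Gaussian-process choice here are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical "expensive" simulator: a damped oscillation standing in for a
# costly multi-physics run (pure illustration, not from the paper).
def expensive_simulator(x):
    return np.sin(3.0 * x) * np.exp(-0.3 * x)

# Run the simulator at a small number of design points.
X_train = np.linspace(0.0, 5.0, 12).reshape(-1, 1)
y_train = expensive_simulator(X_train).ravel()

# Fit a Gaussian-process emulator, a common choice of surrogate.
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
emulator = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
emulator.fit(X_train, y_train)

# Query the cheap surrogate densely, with predictive uncertainty,
# instead of re-running the expensive simulator for every point.
X_query = np.linspace(0.0, 5.0, 200).reshape(-1, 1)
y_mean, y_std = emulator.predict(X_query, return_std=True)
print(f"max predictive std over the sweep: {y_std.max():.3f}")
```

In practice the surrogate's uncertainty can also drive where to run the simulator next (active learning), which is one way the motifs in the SI-stack are meant to interlock.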
Turning principles into practice is one of the most pressing challenges of artificial intelligence (AI) governance. In this article, we reflect on a novel governance initiative by one of the world's largest AI conferences. In 2020, the Conference on Neural Information Processing Systems (NeurIPS) introduced a requirement for submitting authors to include a statement on the broader societal impacts of their research. Drawing insights from similar governance initiatives, including institutional review boards (IRBs) and impact requirements for funding applications, we investigate the risks, challenges and potential benefits of such an initiative. Among the challenges, we list a lack of recognised best practice and procedural transparency, researcher opportunity costs, institutional and social pressures, cognitive biases, and the inherently difficult nature of the task. The potential benefits, on the other hand, include improved anticipation and identification of impacts, better communication with policy and governance experts, and a general strengthening of the norms around responsible research. To maximise the chance of success, we recommend measures to increase transparency, improve guidance, create incentives to engage earnestly with the process, and facilitate public deliberation on the requirement's merits and future. Perhaps the most important contributions of this analysis are the insights we can gain regarding effective community-based governance and the role and responsibility of the AI research community more broadly.
Current AI policy recommendations differ on what the risks to human autonomy are. To systematically address risks to autonomy, we need to confront the complexity of the concept itself and adapt governance solutions accordingly.

It is hard to overstate the important role autonomy plays in our moral and political institutions. A cornerstone of human dignity and a prerequisite of liberal democracy, autonomy is often considered a fundamental human value [1-4]. Progress in artificial intelligence (AI) development opens up new opportunities for supporting and fostering autonomy, but it simultaneously poses significant risks. Recent incidents of AI-facilitated deception, manipulation, or coercion suggest that AI technologies could seriously interfere with human autonomy on a large scale. Cambridge Analytica's attempt to manipulate voters is just one example [5]; Facebook's "emotional contagion" experiment, in which users were swayed towards adopting certain emotional states, is another [6]. Consequently, human autonomy has become a central theme across guidelines and principles on the responsible development of AI. The European Commission's High-Level Expert Group (HLEG) lists 'respect for autonomy' as the first of its four key ethical principles in their Guidelines on Trustworthy AI [7]. Several other policy documents, including the Association for Computing Machinery's Code of Ethics [8], the Montreal Declaration for Responsible Development of Artificial Intelligence [9], and the European Commission's White Paper on Artificial Intelligence [10], equally emphasise the need to protect and respect autonomy, whereas the Organisation for Economic Co-operation and Development (OECD) lists autonomy as one of their human-centered values [11]. Despite this frequent call for the protection of autonomy, there remains substantial ambiguity within these documents as to (a) what exactly is meant by the term 'autonomy', as well