Intelligence is a central feature of human beings' primary and interpersonal experience. Understanding how intelligence originated and scaled during evolution is a key challenge for modern biology. Some of the most important approaches to understanding intelligence are the ongoing efforts to build new intelligences in computer science (AI) and bioengineering. However, progress has been stymied by a lack of multidisciplinary consensus on what is central about intelligence regardless of the details of its material composition or origin (evolved vs. engineered). We show that Buddhist concepts offer a unique perspective and facilitate a consilience of biology, cognitive science, and computer science toward understanding intelligence in truly diverse embodiments. In the coming decades, chimeric and bioengineering technologies will produce a wide variety of novel beings that look nothing like familiar natural life forms; how shall we gauge their moral responsibility, and our own moral obligations toward them, without the familiar touchstones of standard evolved forms as comparison? Such decisions cannot be based on what an agent is made of or on how much design vs. natural evolution was involved in its origin. We propose that the scope of our potential relationship with, and thus our moral duty toward, any being can be considered in the light of Care—a robust, practical, and dynamic linchpin that formalizes the concepts of goal-directedness, stress, and the scaling of intelligence; it provides a rubric that, unlike other current concepts, is likely not only to survive but to thrive amid the coming advances of AI and bioengineering. We review relevant concepts in basal cognition and Buddhist thought, focusing on the size of an agent's goal space (its cognitive light cone) as an invariant that tightly links intelligence and compassion. Implications range across interpersonal psychology, regenerative medicine, and machine learning.
The Bodhisattva’s vow (“for the sake of all sentient life, I shall achieve awakening”) is a practical design principle for advancing intelligence in our novel creations and in ourselves.
The relationship between humans and technology has attracted increasing attention with the advent of ever-stronger models of artificial intelligence. Humans and technology are intertwined within multiple autopoietic loops of stress, care, and intelligence. This paper suggests that technology should not be seen as a mere tool serving human needs, but rather as a partner in a rich relationship with humans. Our model for understanding autopoietic systems applies equally to biological, technological, and hybrid systems. Regardless of their substrates, all intelligent agents can be understood as needing to respond to a perceived mismatch between what is and what should be. We take this observation, which is evidence of intrinsic links between ontology and ethics, as the basis for proposing a stress-care-intelligence feedback loop (SCI loop for short). We note that the SCI loop provides a perspective on agency that does not require recourse to explanatorily burdensome notions of permanent and singular essences. SCI loops can be seen as individuals only by virtue of their dynamics, and they are thus intrinsically integrative and transformational. We begin by considering the transition from poiesis to autopoiesis in Heidegger and the subsequent enactivist tradition. We then formulate and explain the SCI loop, and we examine its implications in light of Levin's cognitive light cone in biology, as well as the Einstein-Minkowski light cone from special and general relativity in physics. In acknowledgement of Maturana and Varela's project, our findings are considered against the backdrop of a classic Buddhist model for the cultivation of intelligence, known as the bodhisattva. We conclude by noting that the SCI loops of human and technological agency can be seen as mutually integrative once one notices the stress transfers between them.
The loop framework thus acknowledges encounters and interactions between humans and technology in a way that does not relegate one to the subservience of the other (neither in ontological nor in ethical terms), suggesting instead integration and mutual respect as the default for their engagements. Moreover, an acknowledgement of diverse, multiscale embodiments of intelligence suggests an expansive model of ethics not bound by artificial, limited criteria based on privileged composition or history of an agent. The implications for our journey into the future appear numerous.