In many domains, intelligent agents must coordinate their activities to succeed both individually and collectively. Over the last ten years, research in distributed artificial intelligence has emphasized building knowledge-lean systems, where coordination emerges either from simple rules of behavior or from a deep understanding of general coordination strategies. In this paper, we contend that there is an alternative for domains in which the types and methods of coordination are well structured (even though the environment may be very unstructured and dynamic). The alternative is to build real-time, knowledge-based agents that have a broad but shallow understanding of how to coordinate. We demonstrate the viability of this approach by example. Specifically, we have built agents that model the coordination performed by Navy and Air Force pilots and controllers in air-to-air and air-to-ground missions within a distributed interactive simulation environment. The major contribution of the paper is an examination of the requirements and approaches for supporting knowledge-based coordination, in terms of the structure of the domain, the agents' knowledge of the domain, and the underlying AI architecture.
In recent years there has been a fair amount of research directed toward the goal of developing virtual, human-like characters for simulation environments. Much of this work has focused on creating high-fidelity graphical animations that represent realistic human forms and movement. We are approaching the same goal from a different angle, focusing on the real-time generation of autonomous, intelligent behaviors that a virtual human must use to attain its goals in a complex environment. Because the emphasis of our work is on high-level behavior, rather than visual representation, our current work is geared toward non-visual simulation environments. TacAir-Soar is a system that generates intelligent behavior for flying missions in simulated fixed-wing aircraft for military training simulations. The system has been in development for over five years, and has participated in multiple military training exercises. This paper presents many of the lessons we have learned in developing an autonomous, real-time system, together with suggestions for how these lessons might apply to the development of a full real-time, autonomous, virtual human that also incorporates realistic visual representation and movement.

This paper describes a system called TacAir-Soar, the purpose of which is to populate military distributed simulation environments with autonomous, computer-controlled entities that generate believable and "human-like" behavior [7,12,19]. Although TacAir-Soar provides synthetic characters in a realistic virtual environment, the emphasis of the system is not on high-resolution graphics or real-time graphical animation. Rather, the system interacts with other humans mostly "beyond visual range". Human participants in the virtual environment generally observe TacAir-Soar's behavior by watching the movements of a simulated radar blip, listening or talking on a simulated radio, or sometimes looking at a visual simulation of a nearby aircraft, but the humans never get close enough that it is necessary to represent graphically the physical movements of a simulated human. Instead, the system must autonomously generate high-level behavior that results in observable actions that look like they were generated by a human in a flight simulator.

TacAir-Soar relies on mature intelligent systems technology, including a rule-based, hierarchical representation of goals and situation descriptions. Its success depends on the large-scale integration of a number of intelligent capabilities in a complex domain. It does not just model a small set of tasks pertinent to military fixed-wing missions; it generates appropriate behavior for every such mission routinely used by the US Navy, Air Force, and Marines; the UK Royal Air Force; and "opponent forces" in full-scale exercises.

We developed TacAir-Soar to answer three primary challenges: to generate intelligent behavior in a realistic and complex domain, to generate the behavior in real time, and to be autonomous enough to integrate naturally into the existing training patterns of the military...
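To make the phrase "rule-based, hierarchical representation of goals and situation descriptions" concrete, the Python sketch below shows one minimal way such a structure could be organized. It is an illustration only, not the authors' implementation: TacAir-Soar is built from Soar production rules, and every name here (Goal, propose_operators, "intercept", "achieve-proximity", and the situation keys) is hypothetical.

```python
# Hypothetical sketch of a rule-based goal hierarchy; TacAir-Soar itself is
# implemented as Soar production rules, not Python. All identifiers below are
# illustrative inventions, not names from the actual system.

from dataclasses import dataclass, field


@dataclass
class Goal:
    """A node in the goal hierarchy; subgoals are pursued to achieve it."""
    name: str
    subgoals: list["Goal"] = field(default_factory=list)


def propose_operators(situation: dict, goal: Goal) -> list[str]:
    """Each 'if situation then operator' test mimics a production rule that
    matches the current situation description against the active goal."""
    proposals = []
    if goal.name == "intercept" and situation.get("bogey_detected"):
        proposals.append("achieve-proximity")
    if goal.name == "achieve-proximity" and situation.get("in_weapon_range"):
        proposals.append("employ-weapons")
    if goal.name == "execute-mission":
        proposals.append("fly-route")  # default mission-level behavior
    return proposals


def decide(situation: dict, goal_stack: list[Goal]) -> str:
    """One decision cycle: consult the most specific active goal first and
    fall back up the hierarchy when no rule fires for it."""
    for goal in reversed(goal_stack):
        ops = propose_operators(situation, goal)
        if ops:
            return ops[0]
    return "wait"


if __name__ == "__main__":
    mission = Goal("execute-mission", [Goal("intercept")])
    stack = [mission, mission.subgoals[0]]
    print(decide({"bogey_detected": True}, stack))   # -> achieve-proximity
    print(decide({"bogey_detected": False}, stack))  # -> fly-route
```

The point of the sketch is the control structure: behavior is selected by matching the current situation against rules attached to a stack of active goals, so low-level actions remain consistent with the higher-level mission context, which is the style of decision making the abstract attributes to the system.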