AI is one of the most debated subjects of today, and there seems to be little common understanding of the differences and similarities between human intelligence and artificial intelligence. Discussions of many relevant topics, such as trustworthiness, explainability, and ethics, are characterized by implicit anthropocentric and anthropomorphic conceptions and, for instance, by the pursuit of human-like intelligence as the gold standard for artificial intelligence. In order to foster more agreement and to substantiate possible future research objectives, this paper presents three notions on the similarities and differences between human and artificial intelligence: 1) the fundamental constraints of human (and artificial) intelligence, 2) human intelligence as one of many possible forms of general intelligence, and 3) the high potential impact of multiple (integrated) forms of narrow-hybrid AI applications. For the time being, AI systems will have fundamentally different cognitive qualities and abilities than biological systems. For this reason, the most prominent issue is how we can use (and “collaborate” with) these systems as effectively as possible. For which tasks and under which conditions are decisions safe to leave to AI, and when is human judgment required? How can we capitalize on the specific strengths of human and artificial intelligence? How can AI systems be deployed effectively to complement and compensate for the inherent constraints of human cognition (and vice versa)? Should we pursue the development of AI “partners” with human(-level) intelligence, or should we focus more on supplementing human limitations? In order to answer these questions, humans working with AI systems in the workplace or in policy making have to develop an adequate mental model of the underlying ‘psychological’ mechanisms of AI. Thus, to obtain well-functioning human-AI systems, Intelligence Awareness in humans should be addressed more vigorously. For this purpose, a first framework for educational content is proposed.
With recent technological advances, commanders request the support of artificial intelligence (AI)-enabled systems during mission planning. Future AI systems may test a wide range of courses of action (COAs) and use a simulator to test each COA’s effectiveness in a war game. A COA’s effectiveness, however, depends on the commander’s intent. This raises the question: to what degree can a machine understand the commander’s intent? Currently, the intent has to be programmed manually, costing valuable time. We therefore tested whether a tool can understand a freely written intent, so that a commander can work with an AI system with minimal effort. The work consisted of letting a tool understand the commander’s language and grammar to find relevant information in the intent; creating a (visual) representation of the intent for the commander (back brief); and creating an intent-based computable measure of effectiveness. We proposed a novel quantitative evaluation metric for understanding the commander’s intent and tested the results qualitatively with platoon commanders of the 11th Airmobile Brigade. They were positively surprised by the level of understanding and appreciated the validation feedback. The computable measure of effectiveness is a first step toward bridging the gap between commander’s intent and machine learning for military mission planning.
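The pipeline the abstract describes, extracting relevant elements from a freely written intent and then scoring a simulated COA outcome against them, can be sketched minimally. The keyword patterns, intent fields, and scoring rule below are illustrative assumptions for the sketch, not the actual grammar or metric of the tool described in the paper:

```python
import re

# Hypothetical patterns for a few doctrinal intent elements
# (purpose, key task, end state); the real tool's vocabulary
# and grammar handling are not described in the abstract.
PATTERNS = {
    "purpose": r"\bin order to\s+([^.;]+)",
    "key_task": r"\b(seize|secure|defend|clear)\s+([^.;]+)",
    "end_state": r"\bend state\s*[:is]*\s*([^.;]+)",
}

def parse_intent(text: str) -> dict:
    """Extract intent elements from free text via keyword patterns."""
    text = text.lower()
    found = {}
    for label, pattern in PATTERNS.items():
        m = re.search(pattern, text)
        if m:
            # Last capture group holds the element's content.
            found[label] = m.group(m.lastindex).strip()
    return found

def measure_of_effectiveness(intent: dict, sim_outcome: dict) -> float:
    """Toy computable MoE: the fraction of extracted intent elements
    that the simulated COA outcome reports as achieved."""
    if not intent:
        return 0.0
    achieved = sum(1 for key in intent if sim_outcome.get(key, False))
    return achieved / len(intent)

intent = parse_intent(
    "In order to protect the main supply route, 1st platoon will "
    "secure the bridge at grid 1234. End state: bridge held and route open."
)
# Score the intent against a simulated war-game outcome (made up here).
score = measure_of_effectiveness(
    intent, {"purpose": True, "key_task": True, "end_state": False}
)
print(intent)
print(score)
```

The parsed dictionary also provides the raw material for a back brief: each recognized element can be shown back to the commander for validation, which is the feedback step the platoon commanders appreciated.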