AI is one of the most debated subjects of today, and there seems to be little common understanding concerning the differences and similarities between human intelligence and artificial intelligence. Discussions on many relevant topics, such as trustworthiness, explainability, and ethics, are characterized by implicit anthropocentric and anthropomorphic conceptions and, for instance, by the pursuit of human-like intelligence as the gold standard for artificial intelligence. In order to provide more agreement and to substantiate possible future research objectives, this paper presents three notions on the similarities and differences between human and artificial intelligence: 1) the fundamental constraints of human (and artificial) intelligence, 2) human intelligence as one of many possible forms of general intelligence, and 3) the high potential impact of multiple (integrated) forms of narrow-hybrid AI applications. For the time being, AI systems will have fundamentally different cognitive qualities and abilities than biological systems. For this reason, a most prominent issue is how we can use (and "collaborate" with) these systems as effectively as possible. For what tasks and under what conditions is it safe to leave decisions to AI, and when is human judgment required? How can we capitalize on the specific strengths of human and artificial intelligence? How can AI systems be deployed effectively to complement and compensate for the inherent constraints of human cognition (and vice versa)? Should we pursue the development of AI "partners" with human(-level) intelligence, or should we focus more on supplementing human limitations? In order to answer these questions, humans working with AI systems in the workplace or in policy making have to develop an adequate mental model of the underlying 'psychological' mechanisms of AI. So, in order to obtain well-functioning human-AI systems, Intelligence Awareness in humans should be addressed more vigorously. For this purpose, a first framework for educational content is proposed.
A main threat to objective information processing in crime investigation teams is the tendency to focus on one particular interpretation only. To prevent such tunnel vision or 'groupthink', an investigation team can call in a crime analyst and ask him or her to give a fresh and independent account of the evidence at hand. However, before they examine the case, crime analysts are often already aware of the scenario currently favoured by the team. In our experiment, we investigated whether such prior knowledge can jeopardise the independence of the analyst's advice. Thirty-eight professional crime analysts were asked to generate causal scenarios for two different cases and to indicate how the team should continue their investigation. Before beginning their analysis, half of the crime analysts received a realistic prior interpretation, such as might have been constructed by an investigation team. The results show that, when given a prior interpretation, both experienced and inexperienced analysts considered the scenario suggested therein as more likely, and made recommendations for further investigation accordingly. We explain these findings by suggesting that analysts temporarily adopted the perspective of the investigation team, and that such temporary commitment by itself increased confidence in the hypothesis at hand (Koehler, 1991). This research supports previous research on the impact of prior theory on judgement, and extends it to an important real-world domain where mistakes can have serious consequences. We recommend that in cases where the crime analyst is asked to give an objective assessment, he or she should not be informed about the interpretation of the investigation team until after the analysis has been conducted.
Building up situation understanding is one of the most difficult tasks in the beginning stages of large-scale accidents. As ambiguous information about the events becomes available, decision-makers are often tempted to quickly develop a particular story to explain the observed events. As the accident evolves, decision-makers can fail to revise their initial assessments despite contradicting information. Our approach is to reduce fixation errors and confirmation bias by providing critical thinking support. In a laboratory experiment with 60 participants, we compared the effect on decision making of a critical thinking tool, which requires the explication of evidence-conclusion relations in situation assessment, with two control conditions. Participants acted as crisis managers determining the likely cause of accidents. The results show a positive impact of the tool on both the decision-making process and decision-making effectiveness. Participants did, however, take more time to arrive at a conclusion using the tool.