2021
DOI: 10.1007/978-3-030-85613-7_22

ObjectivAIze: Measuring Performance and Biases in Augmented Business Decision Systems

Abstract: Business process management organizes flows of information and decisions in large organizations. These systems now integrate algorithmic decision aids leveraging machine learning: each time a stakeholder needs to make a decision, such as a purchase, a quote, or a hire, the software leverages the inputs and outcomes of similar past decisions to provide guidance as a recommendation. If confidence is high, the process may be automated; otherwise, it may still help provide consistency in the decisions…

Cited by 6 publications (4 citation statements)
References: 46 publications
“…This means that humans are not only willing to rely on AI because they are "cognitive misers" who take mental shortcuts when making decisions, but also because they perceive artificial intelligence to be trustworthy (Araujo et al., 2020; Kool & Botvinick, 2018). It has been suggested that trust could induce compliance with AI advice due to an authority effect (Baudel et al., 2020). In Experiments 1 and 2, we detected that participants who perceived the AI of the experiment as more helpful and accurate, and who placed more trust in the usefulness of artificial intelligence in healthcare in general, were those who followed the AI recommendations more often and committed more errors in the classification task, as revealed by the positive and significant correlations between the mean number of 40/60 misclassifications in the task and the participants' answers to the post-experimental questions.…”
Section: Discussion · Citation type: mentioning · Confidence: 99%
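The correlation the citing authors describe is straightforward to reproduce in outline. Below is a minimal sketch, assuming a Pearson correlation between per-participant misclassification counts and post-experiment trust ratings; the data and variable names are hypothetical stand-ins, not the study's data.

```python
# Hypothetical sketch: relate each participant's misclassification count to
# their post-experiment trust rating, as the citing study describes.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_participants = 50

# Illustrative stand-ins: a 1-7 Likert trust rating, and misclassification
# counts that co-vary with trust (mirroring the reported direction of effect).
trust_rating = rng.integers(1, 8, size=n_participants).astype(float)
misclassifications = 2 * trust_rating + rng.normal(0, 3, size=n_participants)

r, p = pearsonr(trust_rating, misclassifications)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
# A positive, significant r would mirror the reported finding: participants
# who trusted the AI more followed its (sometimes wrong) advice more often.
```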
“…However, the confidence score alone produced cooperative performance comparable to their best explanations. Baudel et al. [8] also achieved such cooperative performance on a prediction task based on the Titanic dataset, using only the AI's recommendation. However, when using an AI with higher accuracy, the cooperative performance disappeared, suggesting that there is a performance threshold beyond which human-in-the-loop decision making becomes detrimental in terms of efficiency.…”
Section: B. Related Work · Citation type: mentioning · Confidence: 98%
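The threshold effect this passage points to can be illustrated with a small simulation: a decision maker who defers to the AI's recommendation with some fixed probability. All parameter values below are illustrative assumptions, not figures from Baudel et al. [8].

```python
# Minimal simulation of the threshold effect: a human who follows the AI's
# recommendation with some probability, compared against the AI alone.
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000
human_acc = 0.70    # assumed accuracy of an unaided human
follow_prob = 0.60  # assumed probability the human defers to the AI

for ai_acc in (0.75, 0.95):
    ai_correct = rng.random(n_trials) < ai_acc
    human_correct = rng.random(n_trials) < human_acc
    defer = rng.random(n_trials) < follow_prob
    # Cooperative decision: take the AI's answer when deferring, else the human's.
    team_correct = np.where(defer, ai_correct, human_correct)
    print(f"AI alone: {ai_acc:.2f}  human+AI team: {team_correct.mean():.3f}")
# With a moderately accurate AI the team beats the unaided human, but once the
# AI is accurate enough the team underperforms the AI alone, matching the
# threshold the citing authors point to.
```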
“…In this study, explainability is considered in a post-hoc scenario where an AI decision is presented as a recommendation in a decision task that includes a human in the loop. Inspired by empirical approaches to XAI evaluation [6]-[8], we designed an experiment to evaluate explanations of varied forms simultaneously, with few users, from both task-performance and technology-acceptance perspectives. We developed a protocol and an interactive environment to evaluate human-AI interaction on a binary classification task, and applied it to a winner-prediction task on League of Legends (LoL) game data.…”
Section: A. Research Context · Citation type: mentioning · Confidence: 99%
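As a rough sketch of the task-performance side of such a protocol, the snippet below computes team accuracy, AI-alone accuracy, and an acceptance rate from logged trials of a binary prediction task. The Trial fields and metric names are assumptions for illustration, not the authors' actual logging format.

```python
# Hedged sketch of per-trial logging for a binary winner-prediction task:
# the AI's recommendation, the human's final decision, and the ground truth.
from dataclasses import dataclass

@dataclass
class Trial:
    ai_recommendation: int  # 0/1 predicted winner
    human_decision: int     # 0/1 final answer after seeing the recommendation
    ground_truth: int       # 0/1 actual winner

def evaluate(trials: list[Trial]) -> dict[str, float]:
    n = len(trials)
    return {
        # How often the final human-in-the-loop decision was right.
        "team_accuracy": sum(t.human_decision == t.ground_truth for t in trials) / n,
        # How often the AI alone would have been right.
        "ai_accuracy": sum(t.ai_recommendation == t.ground_truth for t in trials) / n,
        # How often the human followed the recommendation.
        "acceptance_rate": sum(t.human_decision == t.ai_recommendation for t in trials) / n,
    }

print(evaluate([Trial(1, 1, 1), Trial(0, 1, 0), Trial(1, 1, 0), Trial(0, 0, 0)]))
```

Comparing team accuracy against AI-alone accuracy is what surfaces the threshold effect discussed in the Related Work passage above.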