2021
DOI: 10.1016/j.artint.2021.103458
A review of possible effects of cognitive biases on interpretation of rule-based machine learning models

Cited by 72 publications (37 citation statements: 1 supporting, 36 mentioning, 0 contrasting)
References 126 publications

Selected citation statements:
“…The main findings from our quantitative studies (see Table 5 for a schematic summary) confirm the literature on the biases implied by agential AI in decision support [11,22,33,74] (case 1) and also suggest that group discussion can leverage collective intelligence and lead to better performance than superhuman AI (case 2). The two qualitative case studies, on the other hand, shed light on the need to contextualize (in the broadest sense) the AI advice, so as to support the users' creative appropriation of the technology.…”
Section: The Concepts of Knowledge Artifact and Ba (supporting, confidence: 82%)
“…Nonetheless, recent research [37,59,70] has highlighted how such methods, too, can lead to problematic appropriation, primarily due to the emergence of biases that may affect the interaction between human users and AI systems and that can be reinforced by XAI methods [33], e.g. automation bias and the "white box paradox" [6].…”
Section: Background, Motivations and Related Work (mentioning, confidence: 99%)
“…Across these cases, biases can creep into the design and outputs (Blodgett et al., 2020). Supervised ML can reflect the cognitive biases of the designers, hidden in the logical decision-making rules of the algorithm that produce interpretable results (Kliegr et al., 2021). Unsupervised ML can acquire stereotyped biases from textual data that reflect human culture (Caliskan et al., 2017).…”
Section: AI and ML: Emerging Ethical Challenges (mentioning, confidence: 99%)
“…It is not new for modeling experts to face questions about model acceptability and validation, as we describe here, but they were trained in a different context and environment compared with the current era of data scientists exposed to AI. Efforts are being made to promote greater transparency in algorithms with low levels of expert intervention [84-88]. Miller [89] calls these efforts "explainable artificial intelligence research" and considers that there will be an increasing need to integrate AI with other fields of knowledge, such as philosophy, cognitive psychology/science, and social psychology.…”
Section: Decision Making (mentioning, confidence: 99%)