2021
DOI: 10.1108/intr-05-2020-0300
Managing the tension between opposing effects of explainability of artificial intelligence: a contingency theory perspective

Abstract: Purpose: Research into the interpretability and explainability of data analytics and artificial intelligence (AI) systems is on the rise. However, most recent studies either solely promote the benefits of explainability or criticize it due to its counterproductive effects. This study addresses this polarized space and aims to identify the opposing effects of the explainability of AI and the tensions between them, and to propose how to manage this tension to optimize AI system performance and trustworthiness. Design/metho…

Cited by 33 publications (20 citation statements). References 79 publications (211 reference statements).
“…For that reason, questions related to accountability need to be addressed, AI models need to be explainable, and their chain of reasoning leading to an outcome needs to be reproducible. One problem is that instruments of explainable AI, or interpretable machine learning techniques, are often developed by computer scientists for computer scientists (Abedin, 2021; Arrieta et al., 2020); in contrast, we argue that research on explainability needs to focus more on end-users, who require different, and more approachable, forms of explanations.…”
Section: Explainability and Accountability, Fairness and Bias
confidence: 87%
“…As advances in AI algorithms and systems provide proxy agency to users through customization of tasks and decisions, they become increasingly capable of exerting their own agency (Sundar, 2020), which in turn gives rise to the tension between machine agency and human agency (Abedin, 2021). There are different research streams on agency, many of which discuss agency only as a potential attribute of humans (e.g., Nevo et al., 2018).…”
Section: AI Agency and Human Interaction with Agentic AI
confidence: 99%
“…This may lead to more accurate ML models and, consequently, to better business outcomes. Explainability might also increase users' trust in AI systems and thereby raise technology adoption (Abedin, 2021).…”
Section: Awareness of Problem
confidence: 99%
“…The second article, by Babak Abedin, proposes a novel framework for managing the opposing effects of AI explainability and addresses polarized beliefs about the benefits of AI explainability and its counterproductive effects. It posits that there is no single best way to maximize AI explainability and that, instead, the co-existence of enabling and constraining effects must be managed (Abedin, 2022).…”
Section: A Summary of the Special Issue
confidence: 99%