2021
DOI: 10.1007/s00146-021-01184-2
Is explainable artificial intelligence intrinsically valuable?

Cited by 29 publications (11 citation statements) | References 20 publications
“…Studies have also shown that AI's predictive capabilities may be exaggerated (Dressel and Farid, 2018;Jung et al, 2017;Salganik et al, 2020). AI's opacity makes algorithmic systems difficult to interrogate and hold accountable (Colaner, 2021;Innerarity, 2021;Loi et al, 2020). Algorithmic systems also incent surveillance and data collection because they need large information sets for model training and analysis.…”
Section: The Functions and Risks of Algorithmic Decision-Making
confidence: 99%
“…In a similar vein, scholars suggest that humans in the loop can fix errors and place guard rails around absurd, unethical, or inappropriate results (Henderson, 2018; Jones, 2017; Rahwan, 2018). Studies have measured the relative importance of transparency in using algorithms (König et al, 2022) and find explainability may not be intrinsically valuable (Colaner, 2021). Other scholars focus on impact statements (Katyal, 2019; Metcalf et al, 2021; Reisman et al, 2018) modeled after environmental or privacy impact assessments, or on ex ante transparency as to goals and metrics (Loi et al, 2020) to document and assess a system's fairness.…”
Section: Theory
confidence: 99%
“…XAI is important, like informed consent, because it provides data subjects with a sense of awareness about how their data are being used. Providing a meaningful explanation of how one’s data are being used can be humanising because it gives data subjects a greater sense of control with respect to their information (Colaner 2021). XAI promises data users ‘fairness, trust, and governability’ (Colaner 2021).…”
Section: Big Data and Informed Consent
confidence: 99%
“…That said, it can even be questioned whether explainability is intrinsically valuable at all [53]. An AI system that is explainable in human terms can be seen as a mere means to an end, namely, to foster fairness, trust, accountability, or individual control over decision-making processes.…”
Section: Example 1: Explainable AI Systems
confidence: 99%