Artificial intelligence, transparency, and public decision-making

2020
DOI: 10.1007/s00146-020-00960-w

Abstract: The increasing use of Artificial Intelligence (AI) for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from the hopes of fully informed and objectively taken decisions to fear for the destruction of mankind. To prevent the negative outcomes and to achieve accountable systems, many have argued that we need to open up the "black box" of AI decision-making and make it more transparent. Whereas this debate has primarily focus…

Cited by 156 publications (52 citation statements)
References 42 publications

Citation statements:
“…The advantages of justifications over explanations (in the sense used in this paper) to enhance trust in ADS have also been analyzed by Karl de Fine Licht and Jenny de Fine Licht [32], who argue that "a limited form of transparency that focuses on providing justifications for decisions has the potential to provide sufficient grounds for perceived legitimacy in AI decision-making."…”
Section: Related Work (mentioning)
confidence: 99%
“…AI is becoming more sophisticated as it surpasses rather than supplements our capabilities. Recent advances in AI such as Google's AlphaGo [9] and self-driving cars have come largely at the price of transparency, with systems increasingly becoming "black boxes", particularly in the machine learning community [11,42]. The importance of explanations in AI and Computing in general is now a research movement gaining traction under the banner of Explainable Artificial Intelligence (XAI) (formerly known as Explanation-aware Computing), with XAI set to be key to new models of communication in AI [3].…”
Section: Explanation (mentioning)
confidence: 99%
“…This is especially important in this era of digitalization in agriculture (and aquaculture) and precision livestock farming [43,44], which heavily depend on models and simulations [252]. Hybrid modelling approaches will result in improved transparency and accountability for decision-making [253,254], which is crucial for animal welfare [255]. Architectural models that implement the whole integrated phenotype [256] may be used to help monitor both the physical health and subjective wellbeing of animals.…”
Section: Consequences For Behaviour and Welfare (mentioning)
confidence: 99%