2020
DOI: 10.1177/1023263x20978649

Proceduralizing control and discretion: Human oversight in artificial intelligence policy

Abstract: This article is an examination of human oversight in EU policy for controlling algorithmic systems in automated legal decision making. Despite the shortcomings of human control over complex technical systems, human oversight is advocated as a solution against the risks of increasing reliance on algorithmic tools. For law, human oversight provides an attractive, easily implementable and observable procedural safeguard. However, without awareness of its inherent limitations, human oversight is in danger of becom…

Cited by 29 publications (7 citation statements); references 7 publications.
“…In addition, it requires an understanding and improvement of how humans interact with and perceive the decisions or other outputs produced by AI systems (Bader and Kaiser, 2019; Araujo et al., 2020; Meissner and Keding, 2021). Human oversight of AI (Wagner, 2019; Koulu, 2020) is an area where further research is needed to understand how human decision-makers may influence or be influenced by AI decisions and to design appropriate and feasible monitoring and oversight mechanisms necessary to improve trust toward AI systems and minimize risks and harms.…”
Section: Recommendations (mentioning)
confidence: 99%
“…While the proposal put forward by the EU authorities focuses on a (legitimate and timely) concern for the concrete impacts of AI on fundamental rights, it appears to disenfranchise representative political processes from the governance of AI. However effective it may be to set up dedicated agencies to monitor the impacts of AI on fundamental rights, proceduralisation of AI is not a guarantee per se, and the current European framework disregards the political value of the choice of authorising the use of AI in decision-making processes.…”
Section: Controlling and Selecting Civic Issues That Are Assigned To ... (mentioning)
confidence: 99%
“…Answers to this question may suggest that while the technology might be developing quickly, it is not, as a societal phenomenon, unique. Therefore, a focal question to ask is what exactly is changing with technology and algorithmisation, and what is not (see e.g., Koulu, 2020a, b). Such a historically and contextually informed enquiry into algorithmic fairness may produce more systemic knowledge on access to algorithmic justice, in turn laying the foundation for more systemic remedies to existing and novel problems.…”
Section: Access To Justice As a Vantage Point (mentioning)
confidence: 99%
“…For example, traditional redress mechanisms are not effortlessly suited to provide legal protection in novel types of conflicts, such as algorithmic discrimination. Furthermore, it is difficult if not impossible to translate fairness and justice, as they are defined by law, into algorithmic systems (see e.g., Koivisto, 2020; Koulu, 2020b; Hakkarainen, 2021; Wachter et al., 2021). Moreover, the growing reliance on technology can also amplify the digital divide: for people with no access or knowledge to navigate the digital environment it becomes harder than before to partake in processes leading to important decisions concerning them (see e.g., Rabinovich-Einy and Katsh, 2017; Wing, 2018; Toohey et al., 2019).…”
Section: Algorithmisation In the Context Of Technological And Legal S... (mentioning)
confidence: 99%