2019
DOI: 10.2139/ssrn.3391266

Accountability of Algorithms in the GDPR and beyond: A European Legal Framework on Automated Decision-Making

Abstract: Automated decision systems appear to carry higher risks today than they ever have before. Digital technologies collect massive amounts of data and evaluate people in every aspect of their lives, such as housing and employment. This collected information is ranked through the use of algorithms. The use of such algorithms may be problematic. Because the results obtained through …

Cited by 10 publications (10 citation statements)
References 1 publication
“…Further validation of the risk profiling system will be provided by sharing with the GPs the information collected through the Dress-KINESIS (baseline data, risk level, and follow-up data) about the GPs’ patients who provided explicit consent. This validation limits the criticisms correlated to a mere algorithmic decision-making process as bias, opacity, and risk of discrimination [39], according to the Articles 4 and 22 and the Recital 71 of the GDPR about profiling.…”
Section: Results (mentioning)
confidence: 99%
“…The increasing availability of data and sophistication of algorithms (including the rebirth of machine learning / neural networks [19]) has enabled more uses and misuses of algorithmically controlled, automated decision-making (ADM, for short) [15]. The scaling of such innovations has happened in a context where algorithms, which still bear little accountability [8], [20], are involved in automating news recommendations [21], [22], advertising [23], but also work [24], [25], [26] and other highly sensitive processes such as social ranking, crime prediction, and bail, parole and criminal sentencing [27]-[29].…”
Section: Automated Decision-Making (ADM) (mentioning)
confidence: 99%
“…Considering this new reality, some efforts have been made to regulate ADM, such as Article 22 of the General Data Protection Regulation (GDPR) and the Article 29 Working Party's Guidelines on automated decision-making and profiling in Europe. However, these existing frameworks remain far from perfect at adequately addressing problems of opacity and discrimination related to machine learning processing and the explanations of automated decision-making [8].…”
Section: Introduction (mentioning)
confidence: 99%
“…Critics claim that the legal framework provides too much freedom to data controllers and insufficiently protects individuals [94]. Furthermore, as ADM systems are very complex, the information should be presented in a comprehensible manner for each individual, and the system's "intentions" made clear [94]. What is problematic is that not all parties involved have a right to an explanation, for instance the general public.…”
Section: Human-Centric ADM Systems for Credit Scoring (mentioning)
confidence: 99%
“…What is problematic is that not all parties involved have a right to an explanation, for instance the general public. Providing the public with information concerning the ADM systems' functioning, however, would be beneficial to reduce public concerns and to improve individuals' understanding of the use of their data [94]. By increasing knowledge about these systems, social accountability could be created, and influence could be exercised to make these systems more human-centric.…”
Section: Human-Centric ADM Systems for Credit Scoring (mentioning)
confidence: 99%