2018
DOI: 10.1007/s41125-018-0031-2

On Chances and Risks of Security Related Algorithmic Decision Making Systems

Cited by 25 publications (13 citation statements)
References 39 publications

“…Today, ADM systems are used in different contexts and support bureaucrats when detecting tax fraud (Botelho and Antunes 2011), assigning future students to universities (Grenet 2018;van Zanten and Legavre 2014), matching job seekers to training schemes (Desiere et al 2019;Fröhlich and Spiecker 2019) or calculating the risk of reoffending for early release from a prison sentence (Berk 2017;Berk et al 2017). While many public administration scholars emphasize the chances that big data and automated pattern detection entails and see the bureaucracy on the path toward "digital era governance" (Margetts and Dunleavy 2013), critical voices emphasize the lack of transparency and accountability of algorithms (e.g., Mittelstadt et al 2016;Ananny and Crawford 2018;Zweig et al 2018). These studies maintain that ADM systems may even produce biased decisions-not only because they incorporate certain values (which may be biased, e.g., Hildebrandt 2016;Yeung 2018), but also because they learn from input data and reproduce the biases found in this data, e.g., concerning ethnic or gender inequalities (Barocas and Selbst 2016;Lepri et al 2018).…”
Section: Algorithms and Public Administration: From the Laboratory to Messy Reality (mentioning)
confidence: 99%
“…By using ML algorithms to forecast crime, predictive models are not constructed according to a specific theory but directly as a result of analysing data with regard to patterns and correlations. Based on an iterative learning, testing and feedback process, the algorithm adjusts the predictive model until a desired predictive quality is reached (Zweig et al., 2018). Ideally, there is no need for any kind of human-made predictive modelling informed by subject-matter theories (Amoore and Raley, 2017: 4).…”
Section: Implications for the Constitution of Predictive Knowledge (mentioning)
confidence: 99%
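As a purely illustrative aside, the learn-test-adjust loop described in the excerpt above can be sketched in a few lines of Python. The sketch below is not taken from the cited works: it fits a hypothetical classifier on synthetic data, checks its predictive quality on held-out cases (AUC here), and keeps adjusting the model until an assumed quality target is met or a budget is exhausted; every name, feature and threshold is a placeholder assumption.

    # Illustrative only: a generic fit/evaluate/adjust loop, not the system
    # discussed in the cited paper. Data, labels and the quality target are
    # hypothetical placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 8))                       # hypothetical case features
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=2000) > 0).astype(int)  # hypothetical outcome label

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    TARGET_AUC = 0.80                                    # assumed "desired predictive quality"
    auc, n_trees = 0.0, 10
    while auc < TARGET_AUC and n_trees <= 640:
        model = RandomForestClassifier(n_estimators=n_trees, random_state=0)
        model.fit(X_tr, y_tr)                            # learning step
        scores = model.predict_proba(X_te)[:, 1]         # testing step
        auc = roc_auc_score(y_te, scores)                # feedback: measure predictive quality
        print(f"trees={n_trees:3d}  AUC={auc:.3f}")
        n_trees *= 2                                     # adjust the model and try again

    print("target reached" if auc >= TARGET_AUC else "budget exhausted before target")

The loop stops as soon as the measured quality crosses the chosen threshold, which is the iterative stopping logic the excerpt attributes to such predictive systems; no subject-matter theory enters the model beyond what is encoded in the data.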
“…Accountability and transparency are essential properties in the new era of artificial intelligence, big data, and digitalization. Not only are these properties necessary for algorithms used in the name of security and public policy, but these are increasingly seen also as crucial for sustaining the very foundations of democratic political systems (Zweig et al 2018). Following this reasoning, the basic Premise P 1 from Fig.…”
Section: Discussion (mentioning)
confidence: 99%