2020
DOI: 10.1007/s40685-020-00134-w
Discriminated by an algorithm: a systematic review of discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development

Abstract: Algorithmic decision-making is becoming increasingly common as a new source of advice in HR recruitment and HR development. While firms implement algorithmic decision-making to save costs as well as increase efficiency and objectivity, algorithmic decision-making might also lead to the unfair treatment of certain groups of people, implicit discrimination, and perceived unfairness. Current knowledge about the threats of unfairness and (implicit) discrimination by algorithmic decision-making is mostly unexplored…

Cited by 200 publications (104 citation statements)
References 96 publications (176 reference statements)
“…We summarized our categorization in Table 1. Köchling & Wehner [15] stated that in HR recruitment and development there are two fairness types that need to be considered: objective and subjective fairness perceptions of applicants and employees about the usage of algorithmic recruiting decisions.…”
Section: Existing Ethical Principles For Using Algorithms In Recruiting
confidence: 99%
“…These guidelines were published by technology companies (e.g., Microsoft, Google, or IBM) as well as by governmental institutions (e.g., the EU Ethics Guidelines for Trustworthy AI). However, a comprehensive systematic literature review that provides structure for these guidelines for research and practice in the context of recruiting is missing, as existing systematic approaches in this field are limited to certain journals [15].…”
Section: Introduction
confidence: 99%
“…The WHO has recently released a guideline developed with various industry experts, academics, and public sector officials, with an emphasis on the protection of human autonomy, equity, transparency, and sustainability, which is indicative of a greater trend by the UN to encourage mindfulness in regard to ML (WHO, 2021). Potential biases that may arise due to disproportionate representation of minority groups are among the endemic problems of current ML, commonly referred to as "Algorithmic Discrimination" (Köchling and Wehner, 2020). For example, Google's facial recognition algorithm was widely criticized for its appalling identification of Black people as apes in 2015; Google promptly "fixed" the issue by preventing the algorithm from classifying gorillas at all (Mulshine, 2015).…”
Section: Ethical Perspectives
confidence: 99%
“…Thus, a vector space constructed using such documents will also represent such a bias (Caliskan et al. 2017; Garg et al. 2018). These biases have been seen as a hindrance to the effectiveness of using embeddings in social interaction applications, such as candidate selection (Köchling and Wehner 2020). However, other biases inherent in word embeddings can in some instances be useful for extracting an underlying concept that caused such a bias to manifest.…”
Section: An Epistemology Of Word Embeddings
confidence: 99%