2022
DOI: 10.5465/ambpp.2022.12392abstract

Adherence to Unethical Instructions from AI Supervisors: Combining Experiments with Machine Learning

Cited by 2 publications (3 citation statements) · References: 0 publications
“…While delegation to AI can improve managerial perceptions of decision quality (Keding & Meissner, 2021), emerging research suggests that employees may have different views of and responses to humans or AIs undertaking leadership functions (de Cremer, 2020). For instance, Lanz et al. (2023) note that employees are less likely to comply with unethical instructions from an AI than from a human supervisor. Future work could examine how those individual‐level beliefs of leaders, employees, and stakeholders shape the use and acceptance of AI for certain leadership activities and ultimately impact leader–employee influence processes.…”
Section: Future Research Directions: Opportunities and Challenges (mentioning)
confidence: 99%
“…Relatedly, third, while initial evidence indicates that humans are less likely to follow the unethical instructions of an AI (versus human) leader (Lanz et al., 2023), at the same time humans seem to experience less moral outrage over algorithmic discrimination than over human discrimination (Bigman et al., 2022). Accordingly, there is a risk that decisions of AI leaders that one “has to follow” are used as an excuse for unethical behavior, with implications for the potential weakening of collective action to address systematic discrimination and other societal issues (Bigman et al., 2022).…”
Section: Implications For Leadership Research (mentioning)
confidence: 99%
“…In this way, we can hopefully maintain a “human-in-the-loop” pattern (Grønsund & Aanestad, 2020) whereby human leaders still (co-)generate a ground truth against which to assess algorithmic leadership and potentially adapt the underlying AI. Students need to develop a digital backbone in order to stand their ground: against the technology itself when it provides ethically questionable advice (e.g., firing certain employee groups because they underperform; Lanz et al., 2023); against engineers who only see the opportunities of the machine (Köbis et al., 2021); and against a multitude of consultants who want to integrate ever new technologies without considering their broader impact.…”
Section: Implications For Leadership Education (mentioning)
confidence: 99%