2022
DOI: 10.5465/amr.2019.0470

Substituting Human Decision-Making with Machine Learning: Implications for Organizational Learning

Cited by 114 publications (56 citation statements)
References 45 publications

“…Despite concentrating on AI’s benefits, we acknowledge potential risks in assisting and even substituting human decision‐making and innovative idea generation. Risks include the myopia of organizational learning (e.g., overlooking long term goal, organizational interdependencies and lack of ability to predict extreme outcomes; see Balasubramanian et al, 2020) and ethical issues (Etzioni & Etzioni, 2017). Future research cannot shy away from this dark side of AI.…”
Section: Discussion
confidence: 99%
“…Fourth, when AI serves as a team member [5] [6], the role of emotional trust is increasingly important [4] [7]. Balasubramanian and colleagues [9] note that AI algorithms cannot easily process emotions as such algorithms are designed with formal rationality in mind. We recommend that IS researchers study different human trust dimensions (i.e.…”
Section: Discussion of Research Advances
confidence: 99%
“…Algorithms are often created and implemented with specific goals predesigned by powerful actors (Kellogg et al, 2020), which may exacerbate agency problems and power imbalances in organizations. As emphasized in the preceding theme relating to bias and opacity in AI, research in this theme also shows that biases are amplified when ML algorithms train on real-world data potentially contaminated with implicit human bias (Balasubramanian et al, 2020;Choudhury et al, 2020). Moreover, organizations that adopt ML to substitute human decision-making may suffer from sub-optimal results and learning myopia if they rely solely on algorithmic decisions (Balasubramanian et al, 2020;Blohm et al, 2020; D. T. Newman et al, 2020).…”
Section: Theme: Organizational Impact of AI Adoption
confidence: 94%