2022
DOI: 10.1177/20539517221111361

Toward a sociology of machine learning explainability: Human–machine interaction in deep neural network-based automated trading

Abstract: Machine learning systems are making considerable inroads in society owing to their ability to recognize and predict patterns. However, the decision-making logic of some widely used machine learning models, such as deep neural networks, is characterized by opacity, thereby rendering them exceedingly difficult for humans to understand and explain and, as a result, potentially risky to use. Considering the importance of addressing this opacity, this paper calls for research that studies empirically and theoretica…
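The opacity the abstract describes is concrete: a trained deep network can score well while its parameters reveal nothing a human can act on, which is why practitioners resort to post-hoc probes. The sketch below is purely illustrative and not drawn from the paper; the synthetic dataset, the network size, and the choice of permutation importance as the probe are all assumptions made for demonstration, using scikit-learn.

```python
# Illustrative sketch (not from the paper): a small neural network is
# accurate yet opaque, and a post-hoc probe is needed to see which
# inputs it relies on. Dataset and model choices are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic tabular data standing in for, e.g., market features.
X, y = make_classification(n_samples=2000, n_features=10,
                           n_informative=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A fully connected network: its learned weights do not, by themselves,
# explain any individual prediction.
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))

# Post-hoc probe: shuffle one feature at a time and measure the drop in
# accuracy; large drops mark features the opaque model leans on.
result = permutation_importance(clf, X_te, y_te, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1][:5]:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")
```

Note that such a probe only ranks inputs by how much the model leans on them; it does not recover the model's decision logic, which is precisely the gap the paper's call for a sociology of explainability addresses.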

Cited by 24 publications (11 citation statements)
References 53 publications
“…The introduction of data-driven algorithms in the domain of medical imaging is one of the leading areas of technological development. A distinguishing characteristic of new algorithms is their black-box character: the complexity of understanding the relations between inputs and outputs [1]. Especially in the medical context, this has major implications for the development and deployment of these algorithms since medical decisions are high stakes and carry strict legal liabilities [2, 3].…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
“…Additionally, some machine learning models, such as deep neural networks, are complex and challenging to interpret. The lack of interpretability can be a concern in critical decision-making processes [90]. Radar data accuracy can be affected by the terrain and urban environments, leading to errors in flood mapping.…”
Section: Discussion
Citation type: mentioning
Confidence: 99%
“…Examples of such in-depth, immersive studies in AI-mediated workplaces have been published in this journal and elsewhere (e.g. Borch and Min, 2022; Borg, 2021); such designs should be extended to examine skill requirements.…”
Section: Research Agenda
Citation type: mentioning
Confidence: 99%
“…Whilst the literature has recognised the importance of skills as a key factor in the development and adoption of AI, there has been a paucity of empirical research on the nature of skill requirements in AI-mediated workplaces. Recent publications, including in this journal, have highlighted the skill gaps that arise when humans interact with AI, for example, when workers seek to make sense of AI-generated predictions (Borch and Min, 2022) or when they validate AI's decisions without understanding the basis of the decisions (Anthony, 2021). The lack of understanding of skill requirements, particularly the scarcity of micro-level data such as AI-focused skill taxonomies, has been recognised as a key barrier to advancing our understanding and measurement of the impact of AI on the future of work more broadly (Frank et al., 2019).…”
Section: Introduction
Citation type: mentioning
Confidence: 99%