2021
DOI: 10.1186/s13244-020-00955-7
Not all biases are bad: equitable and inequitable biases in machine learning and radiology

Abstract: The application of machine learning (ML) technologies in medicine generally, but also in radiology more specifically, is hoped to improve clinical processes and the provision of healthcare. A central motivation in this regard is to advance patient treatment by reducing human error and increasing the accuracy of prognosis, diagnosis and therapy decisions. There is, however, also increasing awareness about bias in ML technologies and its potentially harmful consequences. Biases refer to systematic distortions of d…

Cited by 43 publications (32 citation statements) · References 41 publications
“…osteogenesis imperfecta). This could be due to fewer children within these categories attending emergency departments to provide the necessary imaging data for training AI models, but the result is that specific paediatric populations may be unintentionally marginalised or poorly served by such new technologies, and raises potential ethical considerations about their future usage, particularly when performance characteristics are extrapolated beyond the population on which the tool was developed and validated [54]. An example would be an AI tool which could help to evaluate the particular aspects of fractures relating to suspected physical abuse as an adjunct to clinical practice, given that many practising paediatric radiologists do not feel appropriately trained or confident in this aspect of imaging assessment [55][56][57][58].…”
Section: Algorithm Diagnostic Accuracy Rates
confidence: 99%
“…Author details 1 Centre Antoine Lacassagne, 33 Avenue de Valombrose, 06100 Nice, France. 2 Median Technologies, 1800 route des crêtes, 06560 Valbonne, France.…”
Section: Authors' Contributions
confidence: 99%
“…Dear Editor in Chief, We read with interest the article entitled 'Not all biases are bad: equitable and inequitable biases in machine learning and radiology' by Pot et al [1] recently published in Insights into Imaging.…”
Section: Introduction
confidence: 99%
“…Iannessi et al [ 1 ] commented on our paper “Not all biases are bad: equitable and inequitable biases in machine learning and radiology” [ 2 ]. We thank the authors for their critique.…”
Section: Introduction
confidence: 99%