2018 | DOI: 10.1001/jamainternmed.2018.3763
Potential Biases in Machine Learning Algorithms Using Electronic Health Record Data

Abstract: A promise of machine learning in health care is the avoidance of biases in diagnosis and treatment; a computer algorithm could objectively synthesize and interpret the data in the medical record. Integration of machine learning with clinical decision support tools, such as computerized alerts or diagnostic support, may offer physicians and others who provide health care targeted and timely information that can improve clinical decisions. Machine learning algorithms, however, may also be subject to biases. The …
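The abstract's caution that algorithms learned from electronic health record data can themselves be biased, for example when data are captured less completely for some patient groups, can be made concrete with a small stratified audit. The Python sketch below is a hypothetical illustration using synthetic data and assumed column names (it is not code from the paper): it trains a simple classifier on data with group-dependent missingness and then compares false-negative rates across groups.

```python
# Hypothetical illustration (not from the paper): audit a classifier for
# subgroup performance gaps that can arise from differential data capture
# in an EHR-like dataset. All names and numbers here are assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic data: the predictive feature is recorded less reliably for
# group "B", mimicking differential missingness in the record.
group = rng.choice(["A", "B"], size=n)
signal = rng.normal(size=n)
missing = (group == "B") & (rng.random(n) < 0.5)
recorded = np.where(missing, 0.0, signal)   # crude zero-imputation of missing values
outcome = (signal + 0.3 * rng.normal(size=n) > 0.5).astype(int)

df = pd.DataFrame({"feature": recorded, "group": group, "outcome": outcome})
train, test = train_test_split(df, test_size=0.3, random_state=0)

model = LogisticRegression().fit(train[["feature"]], train["outcome"])
test = test.assign(pred=model.predict(test[["feature"]]))

# False-negative rate by group: a large gap suggests the model systematically
# misses true cases in the group with poorer data capture.
fnr_by_group = (
    test[test["outcome"] == 1]
    .groupby("group")["pred"]
    .apply(lambda p: float((p == 0).mean()))
)
print(fnr_by_group)
```

In practice, this kind of stratified error analysis would be run on real model outputs and patient attributes rather than synthetic data, but the mechanism it illustrates (worse sensitivity for the group whose data are recorded less completely) is the type of bias the article describes.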

Cited by 942 publications (595 citation statements)
References 24 publications
“…There are additional ethical challenges in machine learning that are described in more detail in related reviews. 107,109 Teams building machine learning products need to consider these challenges early and often and incorporate ethical and legal perspectives into their work.…”
Section: Challenges and Opportunities (mentioning)
Confidence: 99%
“…However, as attention turns to the use of machine learning algorithms in healthcare, organizations must be aware that machine learning applied to clinical decision support systems can also be subject to important societal biases, and if used incorrectly, can amplify healthcare disparities (Gianfrancesco, Tamang, Yazdany, & Schmajuk, 2018). This was true in a widely reported case in 2019, when a machine learning algorithm used by many insurers incorporated a faulty metric to determine which patients were high-risk and qualified for additional care management (Obermeyer, Powers, Vogeli, & Mullainathan, 2019).…”
Section: Artificial Intelligence (mentioning)
Confidence: 99%
“…Finally, a completely different type of problem, but also an important one, is how to reduce the biased datasets or heuristics we provide to our DL systems [46], as well as how to control the biases that prevent us from interpreting DL results properly [47]. Obviously, if there is any malicious intent related to such bias, it must also be controlled.…”
Section: Extending Bad and/or Good Human Cognitive Skills Through DL (mentioning)
Confidence: 99%