2019
DOI: 10.1002/hast.977
Deep Ethical Learning: Taking the Interplay of Human and Artificial Intelligence Seriously

Abstract: From predicting medical conditions to administering health behavior interventions, artificial intelligence technologies are being developed to enhance patient care and outcomes. However, as Mélanie Terrasse and coauthors caution in an article in this issue of the Hastings Center Report, an overreliance on virtual technologies may depersonalize medical interactions and erode therapeutic relationships. The increasing expectation that patients will be actively engaged in their own care, regardless of the patients…

Cited by 24 publications (20 citation statements)
References 14 publications
“…Ethics issues identified in the articles containing the term disab*, impair* or deaf in the abstracts. Only n = 11 were relevant (Abascal & Azevedo, 2007; Adams, Encarnação, Rios-Rincón, & Cook, 2018; Bruhn, Homann, & Renzelberg, 2006; Busnel & Giroux, 2010; Fosch-Villaronga & Albo-Canals, 2019; Hersh, 2016; Molina-Carmona, Satorre-Cuerda, Villagrá-Arnedo, & Compañ-Rosique, 2017; Panek et al., 2004; Ramanathan, Sangeetha, Talwai, & Natarajan, 2018; Salvini, Datteri, Laschi, & Dario, 2008; Satterfield & Fabri, 2017). Authors were from the UK (n = 3 times); Canada, Portugal, Spain, USA (each n = 2 times) and Germany, Colombia and India (n = 1 times).…”
Section: Qualitative Analysis
Citation type: mentioning (confidence: 99%)
“…Furthermore, any perpetuated biases incorporated into an ML-HCA may subsequently impact clinical decisions and support self-fulfilling prophecies. For example, if clinicians currently routinely de-escalate or withhold interventions in patients with specific severe injuries or progressive conditions, ML systems may classify such clinical scenarios as nearly always fatal, and any ML-HCA built on such a classification would likely result in an even higher likelihood of de-escalation or withholding, thereby reducing the opportunity to improve outcomes for such conditions (Begoli et al. 2019; Fiske et al. 2019; Nabi 2018; Cohen et al. 2014; Ho 2019; Taljaard et al. 2014). Training of ML-HCAs against real-world data, rather than high-quality research-grade data, may simply perpetuate suboptimal clinical practices that are not aligned with the best scientific evidence.…”
Section: Development: Perpetuation of Bias Within Training Data Risk
Citation type: mentioning (confidence: 99%)
“…Nonetheless, opportunities brought on by emerging AI health monitoring technologies raise ethical questions that must be addressed to ensure that these automated systems can truly enhance care and health outcomes for older adults [31]. While AI technologies could theoretically facilitate early detection of declines and enable timely intervention, a systematic review studying sensor monitoring as a method to measure and support daily functioning for older adults living independently at home shows that there is currently only limited evidence of such effectiveness due to a lack of high methodological quality in relevant studies, and that most of these technologies are still in early stages of development or refinement [32].…”
Section: Intersecting Ethical Considerations of Using AI Health Monitoring
Citation type: mentioning (confidence: 99%)