2021
DOI: 10.1080/03080188.2020.1840223
Clinical translation of computational brain models: understanding the salience of trust in clinician–researcher relationships

Abstract: Computational brain models use machine learning algorithms and statistical models to harness big data for delivering disease-specific diagnosis or prognosis for individuals. They are intended to support clinical decision making and are widely available. However, their translation into clinical practice remains weak despite efforts to improve implementation such as through training clinicians and clinical staff in their use and benefits. In this paper, we argue that it is necessary to go beyond existing impleme…

Cited by 14 publications
(22 citation statements)
References 52 publications
“…Two studies had a contextual perspective and focused on trust as relational between people in the context of the AI application rather than having trust in the technology itself. Datta Burton et al ( 38 ) argued that it is necessary to develop the human side of these tools, which represents a triangle of trust relationships: between patients and clinicians, and between clinicians and researchers. Esmaeilzadeh et al ( 41 ) focused on care encounters and understood trust as the degree to which an individual believes that the clinical encounter is trustworthy and referred to Reddy et al ( 42 ) who understood trust as “Trust is in the clinicians and the clinical tools they use”.…”
Section: Results
confidence: 99%
“…Knowledge and technological skills were found to influence trust in AI ( n = 5) , which emphasized the need for education and training ( 49 ). Four studies understood trust as influenced by earlier usage experience or technological skills ( 38 , 43 , 45 , 46 ), e.g., radiologists were used to highly complex machines in their routine clinical practice, and ease of use may therefore not be a concern in the adoption-related decision making ( 46 ). Personal traits such as cognition and having a positive attitude were associated with higher levels of trust ( n = 3 ), e.g., disposition to trust technology was related to trust in AI use ( 43 , 46 ), and understood as influenced by the individual's cognition and personality ( 46 ).…”
Section: Results
confidence: 99%
“…Between 2016 and 2018, we worked on developing a framework for understanding dual use and misuse issues that go beyond the dualistic military/civilian applications of neuroscience research and neurotechnologies (Aicardi et al 2021; Mahfoud et al 2018). And later between 2018 and 2020, we worked on the ethical implications of brain-inspired computing and Artificial Intelligence (AI), as well as the clinical translation of computational brain models (Burton et al 2021; Ethics & Society et al 2021).…”
Section: Methods
confidence: 99%
“…Again, the politics of expertise and science are bound up with the politics of different kinds of interventions and social transformations (Martin 2006), or lack thereof. While physicians may be concerned with the status of their expertise vis-à-vis AI technologies in clinical settings, as discussed elsewhere in this issue (Hanemaayer; Burton et al, this issue), expertise pertaining to (or impinging on) health equity extends far beyond clinics; it encompasses the public health field as well as broader political, economic and social policy-making, as recognized in ‘Health in All Policies’ frameworks (Rudolph et al 2013). Building trust and investment in AI at one level, such as in clinical care, may mean further entrenching hierarchies and inequalities that diminish health equity overall (Pūras 2020), even if some patients receive high-quality care.…”
Section: The Politics Of Data and Expertise In The Computational Turn
confidence: 99%