2022
DOI: 10.1192/bjp.2022.37
Towards personalised predictive psychiatry in clinical practice: an ethical perspective

Abstract: Personalised prediction models promise to enhance the speed, accuracy and objectivity of clinical decision-making in psychiatry in the near future. This editorial elucidates key ethical issues at stake in the real-world implementation of prediction models and sets out practical recommendations to begin to address these.

Cited by 8 publications (4 citation statements); references 5 publications.
“…10 Given the desire to promote the well-being of their patients, some clinicians may perceive an epistemic obligation to align their clinical judgements with the algorithmic outputs in the interest of high-quality evidence-based decision-making. 11 12 The hope is that AI and digital technologies will help promote improved access to treatment and quality of care. 13 Early work has focused on tools like conversational AI (ie, chatbots) to provide cognitive behavioural therapy and more integrated digital care delivery systems, both of which remain in their infancy and have been met with challenges with uptake and implementation.…”
Section: Introduction (mentioning)
confidence: 99%
“…Previous studies have also demonstrated that using statistical modelling of patients' medical health records to improve the identification of certain mental health problems (most commonly using ‘deep learning’ – a form of artificial intelligence) has been effective in identifying mental health problems (Pham et al, 2017 ; Su et al, 2020 ). However, it is important to be aware of ethical implications of such prediction models as they can undermine patients' and clinicians' sense of agency, and shared decision making (Lane & Broome, 2022 ). Finally, previous research also identified systemic barriers related to the early identification of mental health problems in primary care, such as limited consultation time and long waiting times for specialist services (e.g., O'Brien et al, 2016 ), indicating the need for systemic changes in primary care.…”
Section: Results (mentioning)
confidence: 99%
“…However, obstacles, including issues of accuracy, practical feasibility and ethical acceptability, remain. 32…”
Section: Future Directions (mentioning)
confidence: 99%