2022
DOI: 10.1136/jme-2022-108447
Evidence, ethics and the promise of artificial intelligence in psychiatry

Abstract: Researchers are studying how artificial intelligence (AI) can be used to better detect, prognosticate and subgroup diseases. The idea that AI might advance medicine’s understanding of biological categories of psychiatric disorders, as well as provide better treatments, is appealing given the historical challenges with prediction, diagnosis and treatment in psychiatry. Given the power of AI to analyse vast amounts of information, some clinicians may feel obligated to align their clinical judgements with the out…

Cited by 39 publications (18 citation statements)
References: 79 publications
“…AI could potentially facilitate data interpretation and clinical decisions, although privacy, algorithm bias, and transparency issues exist. Nonetheless, evidence supports their therapeutic potential in psychiatry …”
Section: Types Of Biomarkers In Psychiatry
Confidence: 99%
“…Furthermore, the use of AI in psychiatry is in its infancy. Many challenges remain, including data privacy, ethical considerations, and the need for more representative and high-quality data …”
Section: Types Of Biomarkers In Psychiatry
Confidence: 99%
“…These can include producing draft notes and reports in clinical scenarios, through to more complicated tasks such as providing chat-bot psychotherapy. Nevertheless, there are many risks which are taken when using the service, ranging from information inaccuracy,3 breaching confidentiality4 or providing inappropriate therapeutic advice5; all which have the potential to harm patients.…”
Confidence: 99%