Objectives
Fairness is a core concept meant to grapple with the different forms of discrimination and bias that emerge with advances in artificial intelligence (eg, machine learning, ML). Yet claims to fairness in ML discourses are often vague and contradictory. The response to these issues within the scientific community has been technocratic: studies measure competing definitions of fairness mathematically and/or recommend a range of governance tools (eg, fairness checklists or guiding principles). To advance efforts to operationalise fairness in medicine, we synthesised a broad range of literature.

Methods
We conducted an environmental scan of English-language literature on fairness from 1960 to 31 July 2021. The electronic databases Medline, PubMed and Google Scholar were searched, supplemented by additional hand searches. Data from 213 selected publications were analysed using rapid framework analysis. The search and analysis were completed in two rounds: one to explore previously identified issues (a priori) and one to explore issues emerging from the analysis (de novo).

Results
Our synthesis identified ‘Three Pillars for Fairness’: transparency, impartiality and inclusion. We draw on these insights to propose a multidimensional conceptual framework to guide empirical research on the operationalisation of fairness in healthcare.

Discussion
We apply the conceptual framework generated by our synthesis to risk assessment in psychiatry as a case study. We argue that any claim to fairness must reflect critical assessment and ongoing social and political deliberation around these three pillars with a range of stakeholders, including patients.

Conclusion
We conclude by outlining areas for further research that would bolster ongoing commitments to fairness and health equity in healthcare.
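To make concrete why competing mathematical definitions of fairness can be contradictory, the sketch below (ours, not the study's) computes two widely used criteria, demographic parity and equal opportunity, which a single model can satisfy to very different degrees. The function names and toy data are illustrative assumptions only.

```python
# Minimal sketch (not from the study): two common, mathematically
# competing fairness definitions applied to binary predictions.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between two groups."""
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates between two groups."""
    tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in (0, 1)]
    return abs(tprs[0] - tprs[1])

# Toy data: a model can score well on one criterion and poorly on the other.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_gap(y_pred, group))         # selection-rate gap
print(equal_opportunity_gap(y_true, y_pred, group))  # true-positive-rate gap
```

On these toy data the selection-rate gap is modest (0.25) while the true-positive-rate gap is large (0.67), illustrating how a claim that a model "is fair" depends on which definition was measured.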
Introduction
Managing violence or aggression is an ongoing challenge in emergency psychiatry. Many patients identified as being at risk do not go on to become violent or aggressive. Efforts to automate the assessment of risk involve training machine learning (ML) models on data from electronic health records (EHRs) to predict these behaviours. However, no studies to date have examined which patient groups may be over-represented in false positive predictions, despite evidence of social and clinical biases that may lead to higher perceptions of risk in patients defined by intersecting features (eg, race, gender). Because risk assessment can impact psychiatric care (eg, via coercive measures such as restraints), it is unclear which patients might be underserved or harmed by the application of ML.

Methods and analysis
We pilot a computational ethnography to study how the integration of ML into risk assessment might impact acute psychiatric care, with a focus on how EHR data are compiled and used to predict a risk of violence or aggression. Our objectives include: (1) evaluating an ML model trained on psychiatric EHRs to predict violent or aggressive incidents for intersectional bias; and (2) completing participant observation and qualitative interviews in an emergency psychiatric setting to explore how social, clinical and structural biases are encoded in the training data. Our overall aim is to study the impact of ML applications in acute psychiatry on marginalised and underserved patient groups.

Ethics and dissemination
The project was approved by the research ethics board at the Centre for Addiction and Mental Health (053/2021). Study findings will be presented in peer-reviewed journals and at conferences, and shared with service users and providers.
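As a hedged illustration of objective (1), the sketch below shows one conventional way to audit a classifier for intersectional bias: comparing false positive rates across subgroups defined by intersecting features. The column names, toy data and `false_positive_rates` helper are our assumptions, not the study's actual pipeline.

```python
# Minimal sketch (hypothetical columns standing in for EHR-derived data):
# false positive rate per intersectional subgroup.
import pandas as pd

def false_positive_rates(df, pred_col="pred", label_col="violent",
                         by=("race", "gender")):
    """False positive rate for each subgroup defined by `by`."""
    negatives = df[df[label_col] == 0]                 # patients with no incident
    fpr = negatives.groupby(list(by))[pred_col].mean() # share flagged anyway
    return fpr.sort_values(ascending=False)

# Toy example: subgroups over-represented in false positives rank highest.
df = pd.DataFrame({
    "race":    ["a", "a", "b", "b", "a", "b"],
    "gender":  ["f", "m", "f", "m", "m", "f"],
    "violent": [0, 0, 0, 0, 0, 1],
    "pred":    [1, 0, 1, 1, 0, 1],
})
print(false_positive_rates(df))
```

Ranking subgroups by false positive rate in this way is one standard audit; the study's own evaluation may use different features, metrics or thresholds.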
Prolonged wait times in healthcare are a complex issue that can negatively impact both clients and staff. Longer wait times are often caused by a number of factors, such as overly complicated scheduling, inefficient use of resources, extraneous processes, and misalignment of supply and demand. Growing evidence suggests a correlation between wait times and client satisfaction. This relationship, however, is complex: some research suggests that client satisfaction with wait times may be improved by interventions that enhance the waiting experience rather than shorten actual wait times. This project aimed to improve the average daily rating of the client waiting experience by 1 point on a 7-point Likert scale.

A quality improvement study was conducted to analyse client satisfaction with wait times and to enhance clients’ satisfaction while waiting. Quality improvement methods, mainly co-design sessions, were used to co-create and implement an intervention to improve clients’ experience of waiting in the clinic.

The project resulted in the implementation of a whiteboard intervention in the clinic to inform clients where they are in the queue. The whiteboard also included static data summarising the average wait times from the previous month. Both aspects of the whiteboard were designed to help clients better approximate their wait times. Though the quantitative analysis did not reveal the targeted 1-point improvement on the 7-point Likert scale, feedback from staff and clients was positive. Since implementation, clinic staff and management have developed the intervention into a high-fidelity digital board that is still in use today. Furthermore, use of the intervention has been extended locally, with additional ambulatory clinics at the hospital planning to adopt the set-up in their waiting rooms.
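A minimal sketch of the outcome measure described above, under our own assumptions (hypothetical column names and toy numbers): comparing the average daily Likert rating before and after an intervention against the 1-point target.

```python
# Minimal sketch (hypothetical data): change in average daily rating
# of the waiting experience on a 7-point Likert scale.
import pandas as pd

def daily_means(df):
    """Mean rating per day."""
    return df.groupby("date")["rating"].mean()

pre  = pd.DataFrame({"date": ["d1", "d1", "d2"], "rating": [4, 5, 4]})
post = pd.DataFrame({"date": ["d1", "d2", "d2"], "rating": [5, 4, 6]})

change = daily_means(post).mean() - daily_means(pre).mean()
print(f"Mean daily rating change: {change:+.2f} (target: +1.00)")
```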