We describe the design of an automated assessment and training tool for psychotherapists to illustrate challenges with creating interactive machine learning (ML) systems, particularly in contexts where human life, livelihood, and wellbeing are at stake. We explore how existing theories of interaction design and machine learning apply to the psychotherapy context, and identify “contestability” as a new principle for designing systems that evaluate human behavior. Finally, we offer several strategies for making ML systems more accountable to human actors.
We present CORE-MI, an automated evaluation and assessment system that provides feedback to mental health counselors on the quality of their care. CORE-MI is the first system of its kind for psychotherapy and an early example of applied machine learning in a human-service context. In this paper, we describe the CORE-MI system and report on a qualitative evaluation with 21 counselors and trainees. We discuss the applicability of CORE-MI to clinical practice and explore user perceptions of surveillance, workplace misuse, objectivity, and system reliability that may apply to automated evaluation systems generally.
Objective: Amid electronic health records, laboratory tests, and other technology, office-based patient and provider communication is still the heart of primary medical care. Patients typically present multiple complaints, requiring physicians to decide how to balance competing demands. How this time is allocated has implications for patient satisfaction, payments, and quality of care. We investigate the effectiveness of machine learning methods for automated annotation of medical topics in patient-provider dialog transcripts.
Materials and Methods: We used dialog transcripts from 279 primary care visits to predict talk-turn topic labels. Different machine learning models were trained to operate on single or multiple local talk-turns (logistic classifiers, support vector machines, gated recurrent units) as well as sequential models that integrate information across talk-turn sequences (conditional random fields, hidden Markov models, and hierarchical gated recurrent units).
Results: Evaluation was performed using cross-validation to measure 1) classification accuracy for talk-turns and 2) precision, recall, and F1 scores at the visit level. Experimental results showed that sequential models had higher classification accuracy at the talk-turn level and higher precision at the visit level. Independent models had higher recall scores at the visit level compared with sequential models.
Conclusions: Incorporating sequential information across talk-turns improves the accuracy of topic prediction in patient-provider dialog by smoothing out noisy information from talk-turns. Although the results are promising, more advanced prediction techniques and larger labeled datasets will likely be required to achieve prediction performance appropriate for real-world clinical applications.
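To make the contrast between independent and sequential models concrete, here is a minimal sketch: an independent logistic classifier scores each talk-turn in isolation, and an HMM-style Viterbi pass then smooths those scores with a transition matrix estimated from label bigrams. The toy dialog, topic labels, and smoothing scheme are illustrative assumptions, not the study's pipeline.

```python
# Minimal sketch (not the authors' code): independent talk-turn
# classification vs. a simple sequential (Viterbi) decoding pass.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy visit: each talk-turn gets one topic label (assumed label set).
turns  = ["hi how are you", "my back hurts", "how long has it hurt",
          "two weeks", "let's check your blood pressure", "ok"]
labels = ["smalltalk", "symptom", "symptom", "symptom", "exam", "exam"]
topics = sorted(set(labels))
y = np.array([topics.index(l) for l in labels])

# Independent model: classify every talk-turn in isolation.
vec = TfidfVectorizer()
X = vec.fit_transform(turns)
clf = LogisticRegression(max_iter=1000).fit(X, y)
emission = clf.predict_proba(X)            # per-turn topic probabilities

# Sequential decoding: smooth per-turn probabilities with a transition
# matrix estimated from label bigrams (add-one smoothed).
K = len(topics)
trans = np.full((K, K), 1.0)
for a, b in zip(y[:-1], y[1:]):
    trans[a, b] += 1
trans /= trans.sum(axis=1, keepdims=True)

def viterbi(emission, trans):
    """Most likely label sequence given per-turn and transition scores."""
    T, K = emission.shape
    logp, logt = np.log(emission + 1e-12), np.log(trans)
    score = np.zeros((T, K)); back = np.zeros((T, K), dtype=int)
    score[0] = logp[0]
    for t in range(1, T):
        cand = score[t - 1][:, None] + logt   # cand[i, j]: from i to j
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0) + logp[t]
    path = [int(score[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

print("independent:", [topics[i] for i in emission.argmax(axis=1)])
print("sequential: ", [topics[i] for i in viterbi(emission, trans)])
```

The sequential pass illustrates the abstract's conclusion: transition probabilities penalize implausible topic switches, smoothing out noisy per-turn predictions.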
The Cognitive Therapy Rating Scale (CTRS) is an observer-rated measure of cognitive behavioral therapy (CBT) treatment fidelity. Although widely used, the factor structure and psychometric properties of the CTRS are not well established. Evaluating the factorial validity of the CTRS may increase its utility for training and fidelity monitoring in clinical practice and research. The current study used multilevel exploratory factor analysis to examine the factor structure of the CTRS in a large sample of therapists (n = 413) and observations (n = 1264) from community-based CBT training. Examination of model fit and factor loadings suggested that three within-therapist factors and one between-therapist factor provided adequate fit and the most parsimonious and interpretable factor structure. The three within-therapist factors included items related to (a) session structure, (b) CBT-specific skills and techniques, and (c) therapeutic relationship skills, although three items showed some evidence of cross-loading. All items showed moderate to high …
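As a rough illustration of the multilevel idea, the sketch below separates within-therapist (session-level) variation from between-therapist means before factoring each level. The simulated ratings, item names, and use of scikit-learn's FactorAnalysis are assumptions for the demo; dedicated multilevel EFA software would be more faithful to the study's method.

```python
# Illustrative sketch (not the study's analysis): decompose CTRS-style
# ratings into within- and between-therapist parts, then factor each.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_therapists, obs_per, n_items = 50, 4, 11     # CTRS has 11 items
therapist = np.repeat(np.arange(n_therapists), obs_per)
# Simulated ratings: a therapist-level trait plus session-level noise.
trait = rng.normal(size=(n_therapists, 1))
ratings = trait[therapist] + rng.normal(size=(len(therapist), n_items))
df = pd.DataFrame(ratings, columns=[f"item{i+1}" for i in range(n_items)])
df["therapist"] = therapist

items = df.columns[:-1]
means = df.groupby("therapist")[items].transform("mean")
within = df[items] - means                       # session-level deviations
between = df.groupby("therapist")[items].mean()  # therapist-level means

# Factor each level separately: three within factors, one between factor,
# mirroring the structure the abstract reports.
fa_within = FactorAnalysis(n_components=3, rotation="varimax").fit(within)
fa_between = FactorAnalysis(n_components=1).fit(between)
print("within-level loadings:\n", fa_within.components_.T.round(2))
print("between-level loadings:\n", fa_between.components_.T.round(2))
```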
The COVID-19 pandemic transformed the delivery of psychological services as many psychologists adopted telepsychology for the first time or dramatically increased their use of it. The current study examined qualitative and quantitative data provided by 2,619 practicing psychologists to identify variables facilitating and impeding the adoption of telepsychology in the U.S. at the beginning of the COVID-19 pandemic. The top five reported barriers were inadequate access to technology, diminished therapeutic alliance, technological issues, diminished quality or effectiveness of delivered care, and privacy concerns. The top five reported facilitators were increased safety, better access to patient care, patient demand, efficient use of time, and adequate technology for telepsychology use. Psychologists' demographic and practice characteristics robustly predicted their endorsement of telepsychology barriers and facilitators. These findings provide important context on the implementation of telepsychology at the beginning of the pandemic and may inform future implementation strategies in clinics and healthcare organizations seeking to increase telepsychology utilization.
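As a hypothetical sketch of how practice characteristics might "predict endorsement" of a barrier, the example below fits a logistic regression on invented psychologist attributes. The variable names, coding, and data are illustrative assumptions, not the study's dataset or model.

```python
# Hypothetical sketch: predict endorsement of one barrier ("inadequate
# access to technology") from invented psychologist characteristics.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "years_in_practice": rng.integers(1, 40, n),
    "setting": rng.choice(["private", "hospital", "clinic"], n),
    "prior_telehealth": rng.integers(0, 2, n),
})
# Simulated outcome: prior telehealth experience lowers endorsement odds.
logit = 0.03 * df.years_in_practice - 1.5 * df.prior_telehealth - 0.5
df["endorses_barrier"] = rng.random(n) < 1 / (1 + np.exp(-logit))

pre = ColumnTransformer(
    [("cat", OneHotEncoder(), ["setting"])], remainder="passthrough")
model = make_pipeline(pre, LogisticRegression(max_iter=1000))
model.fit(df.drop(columns="endorses_barrier"), df["endorses_barrier"])
print("mean predicted endorsement:",
      model.predict_proba(df.drop(columns="endorses_barrier"))[:, 1]
      .mean().round(2))
```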