The domains describe the range and types of personal values that multimorbid older adults deem important to well-being and health. Understanding patients' personal values across these domains may help providers develop, share, and follow up on care plans.
With the growing prevalence of psychological interventions, it is vital to have measures that rate the effectiveness of psychological care to assist in training, supervision, and quality assurance of services. Traditionally, quality assessment is carried out by human raters who evaluate recorded sessions along specific dimensions, often codified through constructs relevant to the approach and domain. This is, however, a cost-prohibitive and time-consuming method, which limits its feasibility and use in real-world settings. To facilitate this process, we have developed an automated competency rating tool that processes the raw recorded audio of a session, analyzing who spoke when, what they said, and how the health professional used language to provide therapy. Focusing on a use case of a specific type of psychotherapy called "motivational interviewing," our system gives comprehensive feedback to the therapist, including information about the dynamics of the session (e.g., therapist's vs. client's talking time), low-level psychological language descriptors (e.g., type of questions asked), and higher-level behavioral constructs (e.g., the extent to which the therapist understands the client's perspective). We describe our platform and its performance using a dataset of more than 5,000 recordings drawn from its deployment in a real-world clinical setting, where it is used to assist the training of new therapists. Widespread use of automated psychotherapy rating tools may augment experts' capabilities by providing an avenue for more effective training and skill improvement, eventually leading to more positive clinical outcomes.
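The pipeline described above (diarization to determine who spoke when, transcription of what was said, then language-based feedback) can be illustrated with a minimal sketch. This is a hypothetical simplification, not the deployed system: the `Turn` format, speaker labels, and the open-question heuristic are illustrative assumptions, standing in for the learned models the abstract alludes to.

```python
# Hypothetical sketch: given diarized, transcribed talk turns, compute
# session dynamics (talking-time share) and one low-level language
# descriptor (open vs. closed questions). All names and the keyword
# heuristic are illustrative, not the system's actual components.

from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str       # "therapist" or "client" (from diarization)
    text: str          # transcript of the turn (from speech recognition)
    duration_s: float  # seconds of speech in the turn

# Crude stand-in for a trained question-type classifier.
OPEN_QUESTION_STARTS = ("what", "how", "why", "tell me")

def session_feedback(turns):
    """Aggregate simple per-session metrics from talk turns."""
    total = sum(t.duration_s for t in turns) or 1.0
    therapist_time = sum(t.duration_s for t in turns
                         if t.speaker == "therapist")
    open_q = closed_q = 0
    for t in turns:
        if t.speaker == "therapist" and t.text.rstrip().endswith("?"):
            if t.text.lower().startswith(OPEN_QUESTION_STARTS):
                open_q += 1
            else:
                closed_q += 1
    return {
        "therapist_talk_share": therapist_time / total,
        "open_questions": open_q,
        "closed_questions": closed_q,
    }
```

In the real system, each stage (diarization, transcription, behavioral coding) would be a trained model rather than a rule; the sketch only shows how their outputs compose into session-level feedback.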
The term 'critical design' is on the upswing in HCI. We analyze how discourses around 'critical design' are diverging in Design and HCI. We argue that this divergence undermines HCI's ability to learn from and appropriate the design approaches signaled by this term. Instead, we articulate two ways to broaden and deepen connections between Design and HCI: (1) develop a broader collective understanding of what these design approaches can be, without forcing them to be about 'criticality' or 'critical design,' narrowly construed; and (2) shape a variation of design criticism to better meet Design practices, terms, and ways of knowing.
Direct observation of psychotherapy and performance-based feedback is the gold-standard approach for training psychotherapists. At present, this requires experts and trained human coding teams, which is slow, expensive, and labor intensive. Machine learning and speech signal processing technologies provide a way to scale up feedback in psychotherapy. We evaluated an initial proof-of-concept automated feedback system that generates motivational interviewing quality metrics and provides easy access to other session data (e.g., transcripts). The system automatically provides a report of session-level metrics (e.g., therapist empathy) and therapist behavior codes at the talk-turn level (e.g., reflections). We assessed usability, therapist satisfaction, perceived accuracy, and intentions to adopt. A sample of 21 novice (n = 10) or experienced (n = 11) therapists each completed a 10-min session with a standardized patient. The system received the audio from the session as input and then automatically generated feedback that therapists accessed via a web portal. All participants found the system easy to use and were satisfied with their feedback, 83% found the feedback consistent with their own perceptions of their clinical performance, and 90% reported they were likely to use the feedback in their practice. We discuss the implications of applying new technologies to the evaluation of psychotherapy. Clinical Impact Statement. Question: How do therapists experience automated evaluations of their sessions? Findings: Therapists endorsed strong satisfaction, usability, and perceived accuracy of the automated feedback. Meaning: Machine-learning technologies have the potential to dramatically scale up the amount of feedback therapists receive after their sessions. Next Steps: Building on this pilot study, both usability and accuracy should be tested in larger and different types of therapist samples.
Additional work should focus on the potential impact of automated feedback on therapist behavior in session.
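The feedback report the abstract describes combines session-level metrics with per-talk-turn behavior codes. A minimal sketch of one plausible report structure, as a therapist might retrieve it from a web portal, is below; all field names, code labels, and the rating scale are illustrative assumptions, not the study's actual schema.

```python
# Hypothetical report shape: session-level metrics plus per-talk-turn
# behavior codes. Field names and code labels are illustrative only.

from dataclasses import dataclass, field

@dataclass
class TurnCode:
    turn_index: int  # position of the talk turn in the session
    text: str        # transcript of the turn
    code: str        # e.g. "reflection", "open_question"

@dataclass
class SessionReport:
    empathy: float               # session-level rating, e.g. on a 1-5 scale
    therapist_talk_share: float  # fraction of total talking time
    turn_codes: list = field(default_factory=list)

    def count(self, code):
        """Number of talk turns tagged with a given behavior code."""
        return sum(1 for t in self.turn_codes if t.code == code)
```

Separating session-level summaries from turn-level codes mirrors the two granularities of feedback the abstract mentions: global quality metrics for supervision, and turn-by-turn codes a therapist can inspect alongside the transcript.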