Imel are cofounders with equity stake in a technology company, Lyssn.io, focused on tools to support training, supervision, and quality assurance of psychotherapy and counseling. Shrikanth S. Narayanan is chief scientist and co-founder with equity stake of Behavioral Signals, a technology company focused on creating technologies for emotional and behavioral machine intelligence. The remaining authors report no conflicts of interest.
Studies on therapist factors have mostly focused on therapist traits rather than states such as affect. Research on therapist affect has often examined therapist baseline well-being or therapist reactions, but not both. Fifteen therapists and 51 clients rated pre- and postsession affect, as well as postsession working alliance and session quality, for 1,172 sessions of individual psychotherapy at a community clinic. Therapists' affect became more positive when clients were initially positive and when clients became more positive over the session, and became more negative when clients were initially negative and when clients became more negative over the session. Furthermore, when therapists were initially positive in affect and when therapists became more positive over the session, clients rated session quality as high. Conversely, when therapists were initially negative in affect and when therapists became more negative over the session, clients rated session quality and working alliance as low. In response to open-ended questions, therapists reported mood shifts in 67% of sessions (63% positive, 50% negative). Positive affect change was attributed to collaborating with the client, perceiving the client to be engaged, or being a good therapist. Negative affect change was attributed to having a difficult client, perceiving the client to be in distress, or being a poor therapist. Thus, therapist state affect at presession and change in affect across a session may independently contribute to the process and outcome of therapy sessions. The examination of within-therapist variables over the course of therapy may further our understanding of therapist factors.
Emotional distress is a common reason for seeking psychotherapy, and sharing emotional material is central to the process of psychotherapy. However, systematic research examining patterns of emotional exchange that occur during psychotherapy sessions has been limited in scale. Traditional methods for identifying emotion in psychotherapy rely on labor-intensive observer ratings, on client or therapist ratings obtained before or after sessions, or on manually extracting ratings of emotion from session transcripts using dictionaries of positive and negative words that do not take the context of a sentence into account. Recent advances in machine learning, in particular natural language processing (NLP), have made it possible for mental health researchers to identify sentiment, or emotion, in therapist-client interactions on a scale that would be unattainable with more traditional methods. In an attempt to extend prior findings from Tanana et al. (2016), we compared their previous sentiment model with a common dictionary-based psychotherapy model, LIWC, and a newer NLP model, BERT. We used the human ratings from a database of 97,497 psychotherapy utterances to train the BERT model. Our findings revealed that the unigram sentiment model (kappa = 0.31) outperformed LIWC (kappa = 0.25), and that BERT outperformed both models (kappa = 0.48).

Keywords: Emotion, Natural language processing, Psychotherapy process, Emotion coding, Sentiment analysis

Psychotherapy involves goal-directed conversations in which people are able to explore their emotions, experiences, and distress. For over a century, researchers and practitioners have consistently acknowledged the central role emotions play in psychotherapy.
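The abstract above compares models by Cohen's kappa, the standard chance-corrected agreement statistic between model predictions and human codes. A minimal sketch of how that statistic is computed is below; the utterance labels are invented for illustration and are not data from the study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two label sequences, corrected for
    the agreement expected by chance from each rater's label frequencies."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items where the labels match
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over labels of the product of marginal frequencies
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[lab] * freq_b.get(lab, 0) for lab in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical sentiment codes (human) vs. model predictions for five utterances
human = ["pos", "neg", "neu", "pos", "neg"]
model = ["pos", "neg", "neu", "neu", "neg"]
print(round(cohens_kappa(human, model), 2))  # → 0.71
```

A kappa of 0.48, as reported for BERT, therefore indicates moderate agreement with human raters after discounting matches that label frequencies alone would produce.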
We present CORE-MI, an automated evaluation and assessment system that provides feedback to mental health counselors on the quality of their care. CORE-MI is the first system of its kind for psychotherapy, and an early example of applied machine learning in a human service context. In this paper, we describe the CORE-MI system and report on a qualitative evaluation with 21 counselors and trainees. We discuss the applicability of CORE-MI to clinical practice and explore user perceptions of surveillance, workplace misuse, notions of objectivity, and system reliability that may apply to automated evaluation systems generally.
Objective: Amid electronic health records, laboratory tests, and other technology, office-based patient and provider communication is still the heart of primary medical care. Patients typically present multiple complaints, requiring physicians to decide how to balance competing demands. How this time is allocated has implications for patient satisfaction, payments, and quality of care. We investigate the effectiveness of machine learning methods for automated annotation of medical topics in patient-provider dialog transcripts.

Materials and Methods: We used dialog transcripts from 279 primary care visits to predict talk-turn topic labels. Different machine learning models were trained to operate on single or multiple local talk-turns (logistic classifiers, support vector machines, gated recurrent units), as well as sequential models that integrate information across talk-turn sequences (conditional random fields, hidden Markov models, and hierarchical gated recurrent units).

Results: Evaluation was performed using cross-validation to measure (1) classification accuracy for talk-turns and (2) precision, recall, and F1 scores at the visit level. Experimental results showed that sequential models had higher classification accuracy at the talk-turn level and higher precision at the visit level. Independent models had higher recall scores at the visit level compared with sequential models.

Conclusions: Incorporating sequential information across talk-turns improves the accuracy of topic prediction in patient-provider dialog by smoothing out noisy information from individual talk-turns. Although the results are promising, more advanced prediction techniques and larger labeled datasets will likely be required to achieve prediction performance appropriate for real-world clinical applications.
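The conclusion above attributes the gain from sequential models to smoothing noisy per-turn predictions. A minimal sketch of that idea, using a standard Viterbi decode over hidden-Markov-model-style scores: "sticky" self-transitions let confident neighboring turns override a single noisy turn. The topic labels, transition probabilities, and per-turn classifier scores below are all made up for illustration and do not come from the study.

```python
import math

def viterbi(emissions, trans, init):
    """Most likely state sequence given per-turn emission log-probs,
    transition log-probs, and initial log-probs (standard Viterbi decode)."""
    states = list(init)
    # best[s] = (log-prob of best path ending in s, that path)
    best = {s: (init[s] + emissions[0][s], [s]) for s in states}
    for emit in emissions[1:]:
        best = {
            s: max(
                ((score + trans[p][s] + emit[s], path + [s])
                 for p, (score, path) in best.items()),
                key=lambda t: t[0],
            )
            for s in states
        }
    return max(best.values(), key=lambda t: t[0])[1]

lg = math.log
states = ("biomedical", "lifestyle")          # hypothetical topic labels
init = {s: lg(0.5) for s in states}
# Sticky transitions: topics tend to persist across adjacent talk-turns
trans = {p: {s: lg(0.9 if p == s else 0.1) for s in states} for p in states}
# Hypothetical per-turn classifier scores; turn 3 is a noisy outlier
emissions = [
    {"biomedical": lg(0.8), "lifestyle": lg(0.2)},
    {"biomedical": lg(0.8), "lifestyle": lg(0.2)},
    {"biomedical": lg(0.4), "lifestyle": lg(0.6)},  # alone, would flip topic
    {"biomedical": lg(0.8), "lifestyle": lg(0.2)},
    {"biomedical": lg(0.8), "lifestyle": lg(0.2)},
]
print(viterbi(emissions, trans, init))
```

Here the independent per-turn argmax would mislabel the middle turn, while the decoded sequence keeps all five turns on "biomedical" — the same trade the abstract reports, where sequential models gain talk-turn accuracy at some cost in visit-level recall for briefly mentioned topics.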