Motivational interviewing (MI) theory proposes a process whereby a set of therapist behaviors has direct effects on client outcomes as well as indirect effects through in-session processes such as client change talk. Despite clear empirical support for the efficacy of MI across settings, the results of studies evaluating proposed links between MI process and outcome have been less clear. In the present study, we used a series of multivariate meta-analyses to test whether there are differential relationships between specific MI-consistent and MI-inconsistent therapist behaviors, MI therapist global ratings, client change language, and clinical outcomes. Across the 19 primary studies (N = 2,614) included in the analysis, MI-consistent therapist behaviors were significantly related to increased client change talk, but also to increased client sustain talk. Higher therapist global ratings (empathy and MI spirit) were significantly related to increased MI-consistent behaviors, decreased MI-inconsistent behaviors, and increased client change talk, yet also to increased client sustain talk. Therapist global ratings were not significantly related to clinical outcomes. Client sustain talk was a significant predictor of worse clinical outcomes, whereas client change talk was unrelated to outcome. Variability within the correlations indicated that some individual MI therapist behaviors were differentially related to therapist global ratings of empathy and MI spirit. Consistent with past research, the present findings confirm some hypothesized MI process-outcome relationships while failing to confirm others. Clinical implications and future directions for research are discussed.
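The meta-analytic approach described here pools effect sizes (correlations between process measures and outcomes) across primary studies while accounting for between-study heterogeneity. As a minimal sketch of the underlying idea, the code below pools Fisher z-transformed correlations with a DerSimonian-Laird random-effects estimate; the per-study correlations and sample sizes are invented for illustration, and the paper's actual multivariate models are considerably more elaborate.

```python
import numpy as np

# Hypothetical per-study correlations between MI-consistent behavior
# and client change talk, with sample sizes (illustrative values only).
r = np.array([0.25, 0.10, 0.32, 0.18, 0.05])
n = np.array([120, 85, 200, 150, 60])

# Fisher z-transform stabilizes the variance of correlations.
z = np.arctanh(r)
v = 1.0 / (n - 3)          # within-study sampling variance of z

# DerSimonian-Laird estimate of between-study variance (tau^2).
w = 1.0 / v
z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)
df = len(z) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)

# Random-effects pooled estimate, back-transformed to r.
w_re = 1.0 / (v + tau2)
z_re = np.sum(w_re * z) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled r = {np.tanh(z_re):.3f}, "
      f"95% CI [{np.tanh(z_re - 1.96 * se_re):.3f}, "
      f"{np.tanh(z_re + 1.96 * se_re):.3f}], tau^2 = {tau2:.4f}")
```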
Emotional distress is a common reason for seeking psychotherapy, and sharing emotional material is central to the process of psychotherapy. However, systematic research examining patterns of emotional exchange during psychotherapy sessions has been limited in scale. Traditional methods for identifying emotion in psychotherapy rely on labor-intensive observer ratings, client or therapist ratings obtained before or after sessions, or ratings extracted manually from session transcripts using dictionaries of positive and negative words that do not take the context of a sentence into account. Recent advances in machine learning, particularly natural language processing (NLP), have made it possible for mental health researchers to identify sentiment, or emotion, in therapist-client interactions at a scale that would be unattainable with more traditional methods. To extend the prior findings of Tanana et al. (2016), we compared their previous sentiment model with a common dictionary-based method, LIWC, and a newer NLP model, BERT. We used the human ratings from a database of 97,497 psychotherapy utterances to train the BERT model. Our findings revealed that the unigram sentiment model (kappa = 0.31) outperformed LIWC (kappa = 0.25), and BERT outperformed both models (kappa = 0.48).

Keywords: Emotion, Natural language processing, Psychotherapy process, Emotion coding, Sentiment analysis
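The kappa values reported above measure chance-corrected agreement between each model's predicted sentiment labels and the human ratings. A minimal sketch of that evaluation step, assuming three sentiment classes; the labels below are invented toy data, not the study's ratings.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical sentiment labels: human ratings vs. two models'
# predictions on the same utterances (values are illustrative).
human   = ["pos", "neg", "neu", "neu", "pos", "neg", "neu", "pos"]
model_a = ["pos", "neu", "neu", "neg", "pos", "neg", "neu", "pos"]
model_b = ["pos", "neg", "neu", "neu", "pos", "neu", "neu", "pos"]

# Cohen's kappa corrects raw agreement for agreement expected by chance.
for name, preds in [("model_a", model_a), ("model_b", model_b)]:
    kappa = cohen_kappa_score(human, preds)
    print(f"{name}: kappa = {kappa:.2f}")
```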
We present CORE-MI, an automated evaluation and assessment system that provides feedback to mental health counselors on the quality of their care. CORE-MI is the first system of its kind for psychotherapy, and an early example of applied machine learning in a human service context. In this paper, we describe the CORE-MI system and report on a qualitative evaluation with 21 counselors and trainees. We discuss the applicability of CORE-MI to clinical practice and explore user perceptions of surveillance, workplace misuse, and notions of objectivity and system reliability that may apply to automated evaluation systems generally.
Providers' adherence in the delivery of behavioral interventions for substance use disorders is not fixed, but instead can vary across sessions, providers, and intervention sites. This variability can substantially impact the quality of intervention that clients receive. However, there has been limited work to systematically evaluate the extent to which substance use intervention adherence varies from session-to-session, provider-to-provider, and site-to-site. The present study quantifies the extent to which adherence to Motivational Interviewing (MI) for alcohol and drug use varies across sessions, providers, and intervention sites and compares the extent of this variability across three common MI research contexts that evaluate MI efficacy, MI effectiveness, and MI training. Independent raters coded intervention adherence to MI from 1,275 sessions delivered by 216 providers at 15 intervention sites. Multilevel models indicated that 57%-94% of the variance in MI adherence was attributable to variability between sessions (i.e., within providers), while smaller proportions of variance were attributable to variability between providers (3%-26%) and between intervention sites (0.1%-28%). MI adherence was typically lowest and most variable within contexts evaluating MI training (i.e., where MI was not protocol-guided and delivered by community treatment providers) and, conversely, adherence was typically highest and least variable in contexts evaluating MI efficacy and effectiveness (i.e., where MI was highly protocolized and delivered by trained therapists). These results suggest that MI adherence in efficacy and effectiveness trials may be substantially different from that obtained in community treatment settings, where adherence is likely to be far more heterogeneous.
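Variance partitioning of this kind is typically done with a three-level random-intercept model, where the proportion of variance at each level (an intraclass correlation) is that level's variance component divided by the total. A minimal sketch using statsmodels on simulated data; the nesting structure, effect sizes, and variances are invented and do not reproduce the study's estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data: adherence scores for sessions nested in providers,
# nested in sites (all parameters are illustrative only).
rng = np.random.default_rng(0)
rows = []
for site in range(15):
    site_eff = rng.normal(0, 0.5)
    for prov in range(5):
        prov_eff = rng.normal(0, 0.4)
        for _ in range(6):
            rows.append({"site": site,
                         "provider": f"{site}-{prov}",
                         "adherence": 3.0 + site_eff + prov_eff
                                      + rng.normal(0, 1.0)})
df = pd.DataFrame(rows)

# Three-level model: random intercept for sites (grouping factor),
# plus a variance component for providers nested within sites.
model = smf.mixedlm("adherence ~ 1", df, groups="site",
                    vc_formula={"provider": "0 + C(provider)"})
fit = model.fit()

var_site = fit.cov_re.iloc[0, 0]   # between-site variance
var_provider = fit.vcomp[0]        # between-provider (within-site) variance
var_session = fit.scale            # residual, i.e., between-session variance
total = var_site + var_provider + var_session
print(f"site: {var_site/total:.1%}, provider: {var_provider/total:.1%}, "
      f"session: {var_session/total:.1%}")
```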
The sharing of emotional material is central to the process of psychotherapy, and emotional problems are a primary reason for seeking treatment. Surprisingly, very little systematic research has been done on patterns of emotional exchange during psychotherapy. A major reason for this void in the research is likely the enormous cost of annotating sessions for affective content. In the field of NLP, there have been major strides in the creation of algorithms for sentiment analysis, but most of this work has focused on written movie reviews and Twitter feeds, with little work on spoken dialogue. We have created a new database of 97,497 utterances from psychotherapy transcripts labeled by humans for sentiment. We describe this dataset and present initial results for models identifying sentiment. We also show that one of the best models from the literature, trained on movie reviews, performed worse than many of our baseline models trained on the psychotherapy corpus.
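A baseline model of the kind referenced here can be as simple as bag-of-words unigram counts fed to a linear classifier. The sketch below is a hypothetical illustration on a toy stand-in corpus, not the authors' actual baseline; the real dataset contains 97,497 labeled utterances.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score
from sklearn.pipeline import make_pipeline

# Tiny stand-in corpus with invented sentiment labels.
utterances = ["I feel so much better this week",
              "I can't stop worrying about everything",
              "We talked about my schedule",
              "I'm really proud of what I did",
              "It all feels hopeless right now",
              "The appointment is on Tuesday"]
labels = ["pos", "neg", "neu", "pos", "neg", "neu"]

# Unigram baseline: single-token counts into multinomial logistic regression.
baseline = make_pipeline(CountVectorizer(ngram_range=(1, 1)),
                         LogisticRegression(max_iter=1000))
baseline.fit(utterances, labels)
preds = baseline.predict(utterances)
print("training-set kappa:", cohen_kappa_score(labels, preds))
```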