To specify the relationship between length of treatment and patient benefit, probit analysis was applied to 15 diverse sets of data from our own research and from research previously reported in the literature. These data were based on over 2,400 patients, covering a period of over 30 years of research. The probit model provided a good fit to these data, and the results were consistent across the various studies, allowing a meta-analytic pooling that provided estimates of the expected benefits of specific "doses" of psychotherapy. This analysis indicates that by 8 sessions approximately 50% of patients are measurably improved, and that approximately 75% are improved by 26 sessions. Further analyses showed differential responsiveness for different diagnostic groups and for different outcome criteria. Implications for research and practice are discussed.
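The dose-response benchmarks above can be illustrated with a small sketch. Assuming, purely for illustration, a probit curve that is linear in log(sessions) — P(improved by session s) = Φ(a + b·ln s) — the two reported anchor points (≈50% by session 8, ≈75% by session 26) pin down the curve:

```python
from math import log
from statistics import NormalDist

# Hedged sketch: a probit dose-response curve calibrated to the two
# benchmarks reported in the abstract (~50% improved by session 8,
# ~75% by session 26). The log-linear probit form is an assumption
# for illustration, not the authors' fitted model.
Phi = NormalDist().cdf
Phi_inv = NormalDist().inv_cdf

# Solve a + b*ln(8) = Phi^{-1}(0.50) and a + b*ln(26) = Phi^{-1}(0.75)
b = (Phi_inv(0.75) - Phi_inv(0.50)) / (log(26) - log(8))
a = Phi_inv(0.50) - b * log(8)

def p_improved(sessions: int) -> float:
    """Expected fraction of patients measurably improved by this session."""
    return Phi(a + b * log(sessions))

for s in (1, 8, 26, 52):
    print(f"session {s:3d}: {p_improved(s):.2f}")
```

By construction the curve reproduces 0.50 at session 8 and 0.75 at session 26; values at other session counts are interpolations under the assumed functional form.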
A fair test of the Dodo bird conjecture that different psychotherapies are equally effective would entail separate comparisons of every pair of therapies. A meta-analysis of overall effect size for any particular set of such pairs is relevant to the Dodo bird conjecture only when the mean absolute value of the differences is 0. The limitations of the underlying randomized clinical trials and the problem of uncontrolled causal variables make clinically useful treatment differences unlikely to be revealed by such heterogeneous meta-analyses. To enhance implications for practice, the authors recommend an intensified focus on patient-treatment interactions, cost-effectiveness variables, and separate meta-analyses for each pair of treatments.

Wampold et al. (1997) examined studies that directly compared "bona fide" treatments [i.e., treatments that "were based on psychological principles, were offered to the psychotherapy community as viable treatments (e.g., through professional books or manuals)," and "were delivered by trained therapists" (p. 205; with at least a master's degree)] to patients with bona fide clinical problems. The results of their analyses are consistent with those of prior meta-analyses, and proponents of psychotherapy can be reassured by the convergence of their findings. For example, Lipsey and Wilson (1993) examined 156 meta-analyses in which treatments were compared with control conditions. They calculated a mean effect size of .47, which was considerably larger than the mean effect size of many widely used, "validated" medical interventions. Grissom (1996) calculated "probability of superiority estimates" (cf. Howard, Krause, & Vessey, 1994) from prior meta-analyses. His analysis indicated that, in general, therapy was much better than no treatment and better than a placebo, and that the median probability of superiority for studies comparing two therapies was only slightly greater than 50-50.
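The "probability of superiority" statistic mentioned above has a simple closed form under the usual normality assumption: PS = Φ(d/√2), the chance that a randomly chosen treated patient outscores a randomly chosen control patient given a standardized mean difference d. A minimal sketch, assuming normal outcomes with equal variances (the conversion is standard, but the specific numbers plugged in here are just the effect sizes quoted in this passage):

```python
from math import sqrt
from statistics import NormalDist

def probability_of_superiority(d: float) -> float:
    """Common-language effect size: probability that a randomly chosen
    treated patient outscores a randomly chosen control patient,
    assuming normally distributed outcomes with equal variances."""
    return NormalDist().cdf(d / sqrt(2))

# d = 0 corresponds to a coin flip (the Dodo bird's 50-50)
print(probability_of_superiority(0.0))   # 0.5

# Lipsey and Wilson's mean treatment-vs-control effect size of .47
print(round(probability_of_superiority(0.47), 2))  # ≈ 0.63
```

This makes concrete why small between-therapy effect sizes translate into probabilities "only slightly greater than 50-50," while the treatment-vs-control contrast is comfortably above chance.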
So Wampold et al.'s meta-analysis is in a tradition of results indicating that efficacy differences between psychotherapeutic treatments are, on the average, modest to small.

How to Compare Treatments

If we look for sheer differences in outcome among psychotherapies to see whether they are all the same in terms of the
Random assignment of patients to comparison groups tends, stochastically, with increasing sample size or number of experimental replications, to minimize the confounding of treatment outcome differences by differences among these groups in unknown or unmeasured patient characteristics. To what degree such confounding is actually avoided we cannot know unless we have validly measured these patient variables, and completely avoiding it is quite unlikely. Even if this confounding were completely avoided, confounding by unmeasured Patient Variable × Treatment Variable interactions remains a possibility. And the causal power of the confounding variables is no less important for internal validity than the degree of confounding.
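The claim that randomization only minimizes imbalance stochastically, rather than guaranteeing it in any one trial, can be seen in a toy simulation. This is a hedged illustration, not part of the authors' analysis; the "unmeasured patient variable" is simply a standard normal draw:

```python
import random
from statistics import mean

# Illustrative sketch: in any single randomized trial the two groups can
# differ on an unmeasured patient characteristic (e.g., baseline severity);
# only the *expected* imbalance shrinks as the per-group sample size grows.
def mean_imbalance(n_per_group: int, trials: int = 2000, seed: int = 0) -> float:
    rng = random.Random(seed)
    diffs = []
    for _ in range(trials):
        # one unmeasured patient variable per patient
        patients = [rng.gauss(0, 1) for _ in range(2 * n_per_group)]
        rng.shuffle(patients)  # random assignment to the two arms
        treated = patients[:n_per_group]
        control = patients[n_per_group:]
        diffs.append(abs(mean(treated) - mean(control)))
    return mean(diffs)

for n in (10, 40, 160):
    print(f"n per group = {n:4d}: mean |group imbalance| = {mean_imbalance(n):.3f}")
```

Each simulated trial typically shows a nonzero imbalance; only averaging over many replications (or using large samples) drives the expected confounding toward zero, which is the point the paragraph makes.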
No matter how careful the design, the unavailability of certain data is a fact of life in every psychotherapy research project. This loss of data takes place at many points during a study, both before and after patients meet inclusion criteria. A variety of methods have been put forth to compensate for the effects of this attrition, but all rest on the untenable assumption that "attritors" and "completers" are equivalent samples of the same patient population. An analysis of the attrition dilemma and a
The variance in outcomes for psychotherapy patients is not partitionable into components that are independent contributions of treatments, therapists, and patients. If these inputs did not influence one another over the course of psychotherapy, they could be independent and so have additive main effects or interaction effects on outcomes. But that is impossible, because they do influence one another, and therapists are responsible for actively managing the psychotherapy process by repeatedly adjusting these inputs toward optimally influencing one another. The consequent interdependence of these inputs within the therapy process needs to be reflected in the design and analysis of psychotherapy outcome studies, as it presently is not, if we are to learn who is adequate for treating whom, how, and why.