This article presents a methodological review of 54 meta-analyses of the effectiveness of clinical psychological treatments, using standardized mean differences as the effect size index. We statistically analyzed the distribution of the number of studies per meta-analysis, the distribution of sample sizes across the studies of each meta-analysis, the distribution of effect sizes within each meta-analysis, the distribution of between-studies variance values, and the Pearson correlation between effect size and sample size in each meta-analysis. The results are presented as a function of the type of standardized mean difference: the posttest standardized mean difference, the standardized mean change from pretest to posttest, and the standardized mean change difference between groups. These findings will help researchers design future Monte Carlo and theoretical studies on the performance of meta-analytic procedures based on the manipulation of realistic model assumptions and meta-analysis parameters. Furthermore, the analysis of the distribution of mean effect sizes across the meta-analyses provides a specific guide for interpreting the clinical significance of the different types of standardized mean differences in the evaluation of clinical psychological interventions.
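For readers less familiar with the three indices, the sketch below shows one common way to compute each of them from summary statistics. The formulas (pooled posttest SD for the between-groups difference, pretest SD as the standardizer for mean changes) are standard textbook choices assumed here for illustration, not taken from the article.

```python
import math

def posttest_smd(m_t, m_c, sd_t, sd_c, n_t, n_c):
    """Posttest standardized mean difference: difference between group
    means at posttest, divided by the pooled posttest standard deviation."""
    s_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                         / (n_t + n_c - 2))
    return (m_t - m_c) / s_pooled

def standardized_mean_change(m_pre, m_post, sd_pre):
    """Standardized mean change from pretest to posttest for one group,
    standardized by the pretest standard deviation."""
    return (m_post - m_pre) / sd_pre

def change_difference(m_pre_t, m_post_t, sd_pre_t,
                      m_pre_c, m_post_c, sd_pre_c):
    """Standardized mean change difference between groups: the treatment
    group's standardized change minus the control group's."""
    return (standardized_mean_change(m_pre_t, m_post_t, sd_pre_t)
            - standardized_mean_change(m_pre_c, m_post_c, sd_pre_c))

# Hypothetical study: treatment improves from 20 to 26 and control from
# 20 to 22, both with a pretest SD of 8 -> change difference of 0.50.
print(change_difference(20, 26, 8, 20, 22, 8))
```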
The Best Possible Self (BPS) exercise promotes a positive view of oneself in the best possible future, after having worked hard towards it. Since the first study examining the benefits of this intervention in 2001, research on the BPS has grown exponentially, and it is currently one of the most widely used Positive Psychology Interventions. However, little is yet known about its overall effectiveness in improving wellbeing outcomes. The aim of this meta-analysis is therefore to shed light on this question. A systematic literature search was conducted, and 29 studies (in 26 articles) met the inclusion criteria of empirically testing the intervention against a control condition. In addition, the BPS was compared to gratitude interventions in some of the included studies. A total of 2,909 participants were involved in the analyses. The outcome measures were wellbeing, optimism, depressive symptoms, and positive and negative affect. Results showed that the BPS is an effective intervention for improving wellbeing (d+ = .325), optimism (d+ = .334), and positive affect (d+ = .511) compared to controls. Small effect sizes were obtained for negative affect and depressive symptoms. Moderator analyses did not yield statistically significant results for wellbeing, except for a trend towards significance for the age of the participants (years) and the magnitude of the intervention (total minutes of practice). In addition, the BPS was found to be more beneficial than gratitude interventions for positive and negative affect (d+ = .326 and d+ = .485, respectively). These results indicate that the BPS can be considered a valuable Positive Psychology Intervention for improving clients' wellbeing, and it seems that it might be more effective for older participants and with shorter practices (measured as total minutes of practice).
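The pooled values reported as d+ are weighted mean effect sizes. The abstract does not state the exact estimator, but a common choice in this literature is a random-effects mean with the DerSimonian-Laird between-studies variance; the sketch below illustrates that computation under that assumption, with made-up numbers rather than the BPS data.

```python
import numpy as np

def dersimonian_laird(d, v):
    """Random-effects weighted mean effect size (d+) using the
    DerSimonian-Laird estimator of the between-studies variance tau^2.
    d: per-study effect sizes; v: their sampling variances."""
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1.0 / v                                  # fixed-effect weights
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)           # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)      # truncated at zero
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    d_plus = np.sum(w_star * d) / np.sum(w_star)
    se = float(np.sqrt(1.0 / np.sum(w_star)))
    return d_plus, tau2, se

# Made-up effect sizes and sampling variances, not the BPS data:
print(dersimonian_laird([0.20, 0.50, 0.35], [0.04, 0.02, 0.03]))
```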
Mixed-effects models can be used to examine the association between a categorical moderator and the magnitude of the effect size. Two approaches are available to estimate the residual between-studies variance, τ_res^2, namely separate estimation within each category of the moderator versus pooled estimation across all categories. We examine, by means of a Monte Carlo simulation study, both approaches for τ_res^2 estimation in combination with two methods to test the statistical significance of the moderator, namely the Wald-type χ^2 and F tests. Results suggest that the F test using a pooled estimate of τ_res^2 across categories is the best option in most conditions, although the F test using separate estimates of τ_res^2 is preferable if the residual heterogeneity variances are heteroscedastic.
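As a rough sketch of the machinery under study (not the authors' simulation code), the following implements the Wald-type χ² omnibus test for a categorical moderator, with τ_res² estimated either separately within each category or pooled across categories. The method-of-moments estimator and the simple pooling rule are assumptions made for brevity; the F test variant is not shown.

```python
import numpy as np
from scipy import stats

def mm_tau2(d, v):
    """Method-of-moments (DerSimonian-Laird) estimate of tau^2."""
    w = 1.0 / v
    mean = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - mean) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (len(d) - 1)) / c)

def q_between(d, v, cat, pooled=True):
    """Wald-type chi-square omnibus test for a categorical moderator.
    Returns (Q_B, p). tau_res^2 is either pooled across categories or
    estimated separately within each category."""
    d, v, cat = map(np.asarray, (d, v, cat))
    levels = np.unique(cat)
    tau2 = {g: mm_tau2(d[cat == g], v[cat == g]) for g in levels}
    if pooled:
        # one simple pooling rule (df-weighted average of the
        # within-category estimates); other rules exist
        k = {g: np.sum(cat == g) for g in levels}
        tau2_pool = (sum(tau2[g] * (k[g] - 1) for g in levels)
                     / sum(k[g] - 1 for g in levels))
        tau2 = {g: tau2_pool for g in levels}
    means, ws = [], []
    for g in levels:
        w = 1.0 / (v[cat == g] + tau2[g])
        means.append(np.sum(w * d[cat == g]) / np.sum(w))
        ws.append(np.sum(w))
    means, ws = np.array(means), np.array(ws)
    grand = np.sum(ws * means) / np.sum(ws)
    q_b = np.sum(ws * (means - grand) ** 2)
    return q_b, stats.chi2.sf(q_b, len(levels) - 1)

# Invented effect sizes for two moderator categories:
d   = [0.10, 0.30, 0.25, 0.60, 0.55, 0.70]
v   = [0.03, 0.02, 0.04, 0.03, 0.02, 0.05]
cat = ["cbt", "cbt", "cbt", "other", "other", "other"]
print(q_between(d, v, cat, pooled=True))
print(q_between(d, v, cat, pooled=False))
```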
Reliability generalization (RG) is a meta-analytic approach that aims to characterize how reliability estimates from the same test vary across different applications of the instrument. With this purpose, RG meta-analyses typically focus on a particular test and intend to obtain an overall reliability estimate for its scores and to investigate how the composition and variability of the samples affect reliability. Although several guidelines have been proposed in the meta-analytic literature to help authors improve the reporting quality of meta-analyses, none of them were devised for RG meta-analyses. The purpose of this investigation was to develop REGEMA (REliability GEneralization Meta-Analysis), a 30-item checklist (plus a flow chart) adapted to the specific issues that the reporting of an RG meta-analysis must take into account. Based on previous checklists and guidelines proposed in the meta-analytic arena, a first version was elaborated by applying the nominal group methodology. The resulting instrument was submitted to a panel of independent meta-analysis experts and, after discussion, the final version of the REGEMA checklist was reached. In a pilot study, four pairs of coders applied REGEMA to a random sample of 40 RG meta-analyses in Psychology, and the results showed satisfactory inter-coder reliability. REGEMA can be used by: (a) meta-analysts conducting or reporting an RG meta-analysis who aim to improve its reporting quality; (b) consumers of RG meta-analyses who want to make informed critical appraisals of their reporting quality; and (c) reviewers and journal editors evaluating submissions that report an RG meta-analysis.
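The abstract does not say which agreement index was used for the pilot coding, but a common choice for two coders rating categorical checklist items is Cohen's kappa, sketched below purely as an illustration of how such inter-coder reliability can be quantified.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders
    rating the same items on a categorical scale."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Two hypothetical coders rating 10 checklist items as
# reported (1) / not reported (0):
print(cohens_kappa([1, 1, 0, 1, 0, 1, 1, 0, 1, 1],
                   [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]))
```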
Meta-analysis is a powerful and important tool for synthesizing the literature on a research topic. Like other kinds of research, meta-analyses must be reproducible to comply with the principles of the scientific method. Furthermore, reproducible meta-analyses can easily be updated with new data and reanalysed with new and more refined analysis techniques. We empirically assessed the prevalence of transparency and reproducibility-related reporting practices in published meta-analyses from clinical psychology by examining a random sample of 100 meta-analyses. Our purpose was to identify the key points that could be improved, with the aim of providing recommendations for carrying out reproducible meta-analyses. We conducted a meta-review of meta-analyses of psychological interventions published between 2000 and 2020, searching the PubMed, PsycInfo, and Web of Science databases. A structured coding form to assess transparency indicators was created based on previous studies and existing meta-analysis guidelines. We found major issues concerning the reporting of fully reproducible search procedures, the specification of the exact method used to compute effect sizes, the choice of weighting factors and estimators, the lack of availability of the raw statistics used to compute the effect sizes, the lack of interoperability of the available data, and an almost total absence of analysis script sharing. Based on our findings, we conclude with recommendations intended to improve the transparency, openness, and reproducibility-related reporting practices of meta-analyses in clinical psychology and related areas.
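To make concrete what the missing script sharing could look like, here is a minimal, hypothetical example of a self-contained analysis script that recomputes effect sizes from the raw summary statistics. All study names and numbers are invented; in a real meta-analysis the statistics would be shipped as a data file alongside the paper, inlined here only so the sketch runs as-is.

```python
# reproduce_meta.py -- hypothetical minimal shareable analysis script.
# Recomputes every effect size from the raw summary statistics so that
# readers can regenerate (and re-estimate) the meta-analytic results.
import csv
import io
import math

RAW = """study,m_t,m_c,sd_t,sd_c,n_t,n_c
Smith2015,24.1,20.3,7.9,8.2,40,38
Lee2018,18.6,17.0,6.1,5.8,55,60
"""

def hedges_g(m_t, m_c, sd_t, sd_c, n_t, n_c):
    """Bias-corrected standardized mean difference (Hedges' g)."""
    s = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                  / (n_t + n_c - 2))
    d = (m_t - m_c) / s
    j = 1 - 3 / (4 * (n_t + n_c) - 9)   # small-sample correction
    return j * d

rows = list(csv.DictReader(io.StringIO(RAW)))
for r in rows:
    g = hedges_g(float(r["m_t"]), float(r["m_c"]),
                 float(r["sd_t"]), float(r["sd_c"]),
                 int(r["n_t"]), int(r["n_c"]))
    print(r["study"], round(g, 3))
```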