When we synthesize research findings via meta-analysis, it is common to assume that the true underlying effect differs across studies. Total variability consists of the within-study and between-study variances (heterogeneity). Established measures, such as I², quantify the proportion of the total variation attributed to heterogeneity, and a plethora of methods is available for estimating heterogeneity. The widely used DerSimonian and Laird estimation method has been challenged, but knowledge of the overall performance of heterogeneity estimators is incomplete. We identified 20 heterogeneity estimators in the literature and evaluated their performance in terms of mean absolute estimation error, coverage probability, and length of the confidence interval for the summary effect via a simulation study. Although previous simulation studies have suggested the Paule-Mandel estimator, it has not been compared with all the available estimators. For dichotomous outcomes, estimating heterogeneity through Markov chain Monte Carlo is a good choice if an informative prior distribution for heterogeneity is employed (e.g., from published Cochrane reviews). Nonparametric bootstrap and positive DerSimonian and Laird perform well on all assessment criteria for both dichotomous and continuous outcomes. The Hartung-Makambi estimator can be the best choice when heterogeneity values are close to 0.07 for dichotomous outcomes and for medium heterogeneity values (0.01, 0.05) for continuous outcomes. Hence, there are heterogeneity estimators (nonparametric bootstrap DerSimonian and Laird and positive DerSimonian and Laird) that perform better than the suggested Paule-Mandel estimator. Maximum likelihood provides the best performance for both types of outcome in the absence of heterogeneity.
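The standard DerSimonian-Laird moment estimator and the I² statistic mentioned in this abstract can be sketched as follows. This is a minimal illustration under the usual inverse-variance weighting setup; the function names and the truncation-at-zero convention are my own, and the abstract's "positive DerSimonian and Laird" variant (which enforces a strictly positive floor) is not reproduced here.

```python
import numpy as np

def dersimonian_laird_tau2(y, v):
    """DerSimonian-Laird moment estimator of between-study variance tau^2.

    y : study effect estimates; v : their within-study variances.
    Negative moment estimates are truncated to zero (the standard convention).
    """
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                         # inverse-variance (fixed-effect) weights
    k = len(y)
    mu = np.sum(w * y) / np.sum(w)      # fixed-effect pooled estimate
    Q = np.sum(w * (y - mu) ** 2)       # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (k - 1)) / c)

def i_squared(y, v):
    """I^2: proportion of total variation attributable to heterogeneity."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v
    mu = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu) ** 2)
    if Q == 0.0:                        # no observed variation beyond chance
        return 0.0
    return max(0.0, (Q - (len(y) - 1)) / Q)
```

For example, two studies with effects 0 and 2 and equal within-study variance 0.5 give tau² = 1.5 and I² = 0.75, while identical study effects give tau² = 0.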
Autism spectrum disorder (ASD) substantially contributes to the burden of mental disorders. Improved awareness and changes in the diagnostic criteria of ASD may have influenced its diagnostic rates. However, while data on trends in diagnostic rates in some individual countries have been published, updated estimates of diagnostic rate trends and ASD-related disability at the global level are lacking. Here, we used the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) data to address this gap, focusing on changes in prevalence, incidence, and disability-adjusted life years (DALYs) of ASD across the world. From 1990 to 2019, overall age-standardized estimates remained stable globally. Both prevalence and DALYs increased in countries with a high socio-demographic index (SDI). However, the age-standardized incidence decreased in some low SDI countries, indicating a need to improve awareness. The male/female ratio decreased between 1990 and 2019, possibly reflecting increasing clinical attention to ASD in females. Our results suggest that ASD detection in low SDI countries is suboptimal, and that ASD prevention/treatment in countries with high SDI should be improved considering the increasing prevalence of the disorder. Additionally, growing attention is being paid to ASD diagnosis in females, who might previously have been overlooked by ASD epidemiologic and clinical research. ASD burden estimates are underestimated, as GBD does not account for mortality in ASD.
Missing data result in less precise and possibly biased effect estimates in single studies. Bias arising from studies with incomplete outcome data is naturally propagated in a meta‐analysis. Conventional analysis using only individuals with available data is adequate when the meta‐analyst can be confident that the data are missing at random (MAR) in every study—that is, that the probability of missing data does not depend on unobserved variables, conditional on observed variables. Usually, such confidence is unjustified as participants may drop out due to lack of improvement or adverse effects. The MAR assumption cannot be tested, and a sensitivity analysis to assess how robust results are to reasonable deviations from the MAR assumption is important. Two methods may be used based on plausible alternative assumptions about the missing data. Firstly, the distribution of reasons for missing data may be used to impute the missing values. Secondly, the analyst may specify the magnitude and uncertainty of possible departures from the missing at random assumption, and these may be used to correct bias and reweight the studies. This is achieved by employing a pattern mixture model and describing how the outcome in the missing participants is related to the outcome in the completers. Ideally, this relationship is informed using expert opinion. The methods are illustrated in two examples with binary and continuous outcomes. We provide recommendations on what trial investigators and systematic reviewers should do to minimize the problem of missing outcome data in meta‐analysis.
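The second, pattern-mixture approach described in the abstract above can be sketched for a continuous outcome in a single study arm. This is a simplified first-order illustration only: the function name, the additive departure `delta` (expert-specified difference between missing and observed participants' means), and its uncertainty `delta_sd` are illustrative assumptions, and the full method also reweights studies in the meta-analysis, which is not shown here.

```python
def pattern_mixture_mean(y_obs_mean, y_obs_var, n_obs, n_miss, delta, delta_sd):
    """Adjust one study arm's mean for missing continuous outcomes under a
    simple pattern-mixture model (hypothetical helper, first-order sketch).

    Assumes the missing participants' mean equals the observed mean plus an
    expert-specified offset `delta`, with standard deviation `delta_sd`
    expressing uncertainty about that departure from missing-at-random.
    Returns the adjusted mean and its inflated variance.
    """
    n = n_obs + n_miss
    p_miss = n_miss / n                       # observed proportion missing
    mean_adj = y_obs_mean + p_miss * delta    # shift by assumed departure
    # Inflate the variance by propagating the uncertainty in delta;
    # delta = 0 with delta_sd = 0 recovers the available-case analysis.
    var_adj = y_obs_var + (p_miss * delta_sd) ** 2
    return mean_adj, var_adj
```

Under MAR (delta = 0, delta_sd = 0) the observed-data estimate is returned unchanged; a nonzero delta shifts the arm mean, and a nonzero delta_sd downweights the study by enlarging its variance, mirroring the bias-correction-and-reweighting idea described above.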