Background: Verbal autopsy methods are critically important for evaluating the leading causes of death in populations without adequate vital registration systems. With a myriad of analytical and data collection approaches, it is essential to create a high-quality validation dataset from different populations to evaluate comparative method performance and make recommendations for future verbal autopsy implementation. This study was undertaken to compile a set of strictly defined gold standard deaths for which verbal autopsies were collected to validate the accuracy of different methods of verbal autopsy cause of death assignment.

Methods: Data collection was implemented in six sites in four countries: Andhra Pradesh, India; Bohol, Philippines; Dar es Salaam, Tanzania; Mexico City, Mexico; Pemba Island, Tanzania; and Uttar Pradesh, India. The Population Health Metrics Research Consortium (PHMRC) developed stringent diagnostic criteria, including laboratory, pathology, and medical imaging findings, to identify gold standard deaths in health facilities, as well as an enhanced verbal autopsy instrument based on World Health Organization (WHO) standards. A cause list was constructed based on the WHO Global Burden of Disease estimates of the leading causes of death, the potential to identify unique signs and symptoms, and the likely existence of sufficient medical technology to ascertain gold standard cases. Blinded verbal autopsies were collected on all gold standard deaths.

Results: Over 12,000 verbal autopsies on deaths with gold standard diagnoses were collected (7,836 adults, 2,075 children, 1,629 neonates, and 1,002 stillbirths). Difficulties in finding sufficient cases to meet gold standard criteria, as well as problems with misclassification for certain causes, meant that the target list of causes for analysis was reduced to 34 for adults, 21 for children, and 10 for neonates, excluding stillbirths. To ensure strict independence for the validation of methods and assessment of comparative performance, 500 test-train datasets were created from the universe of cases, covering a range of cause-specific compositions.

Conclusions: This unique, robust validation dataset will allow scholars to evaluate the performance of different verbal autopsy analytic methods as well as instrument design. This dataset can be used to inform the implementation of verbal autopsies to more reliably ascertain cause of death in national health information systems.
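The 500 test-train datasets described above vary the cause composition of the test data so that method performance is not an artifact of any single cause-of-death distribution. The sketch below shows one way such splits could be generated; the column name gs_cause, the 75/25 split, and the Dirichlet draw used to set each test set's cause fractions are illustrative assumptions rather than the exact PHMRC procedure.

```python
# Illustrative sketch only: generate train/test splits whose test-set
# cause composition is varied by a random Dirichlet draw. Column names
# and resampling details are assumptions, not the published PHMRC recipe.
import numpy as np
import pandas as pd


def make_split(cases, test_frac=0.25, rng=None):
    """Return (train, test); the test set is resampled so its
    cause-specific mortality fractions follow a random Dirichlet draw."""
    rng = rng or np.random.default_rng()
    idx = rng.permutation(len(cases))
    n_test = int(len(cases) * test_frac)
    test = cases.iloc[idx[:n_test]]
    train = cases.iloc[idx[n_test:]]

    causes = cases["gs_cause"].unique()          # gold standard cause labels
    target_csmf = rng.dirichlet(np.ones(len(causes)))
    parts = []
    for cause, frac in zip(causes, target_csmf):
        pool = test[test["gs_cause"] == cause]
        if pool.empty:
            continue
        n_draw = max(1, round(frac * n_test))
        parts.append(pool.sample(n_draw, replace=True,
                                 random_state=int(rng.integers(1 << 31))))
    return train, pd.concat(parts, ignore_index=True)


# Usage: splits = [make_split(gold_standard_cases) for _ in range(500)]
```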
Background: Monitoring progress with disease and injury reduction in many populations will require widespread use of verbal autopsy (VA). Multiple methods have been developed for assigning cause of death from a VA but their application is restricted by uncertainty about their reliability.

Methods: We investigated the validity of five automated VA methods for assigning cause of death: InterVA-4, Random Forest (RF), Simplified Symptom Pattern (SSP), Tariff method (Tariff), and King-Lu (KL), in addition to physician review of VA forms (PCVA), based on 12,535 cases from diverse populations for which the true cause of death had been reliably established. For adults, children, neonates and stillbirths, performance was assessed separately for individuals using sensitivity, specificity, Kappa, and chance-corrected concordance (CCC) and for populations using cause-specific mortality fraction (CSMF) accuracy, with and without additional diagnostic information from prior contact with health services. A total of 500 train-test splits were used to ensure that results are robust to variation in the underlying cause of death distribution.

Results: Three automated diagnostic methods, Tariff, SSP, and RF, but not InterVA-4, performed better than physician review in all age groups, study sites, and for the majority of causes of death studied. For adults, CSMF accuracy ranged from 0.764 to 0.770, compared with 0.680 for PCVA and 0.625 for InterVA; CCC varied from 49.2% to 54.1%, compared with 42.2% for PCVA, and 23.8% for InterVA. For children, CSMF accuracy was 0.783 for Tariff, 0.678 for PCVA, and 0.520 for InterVA; CCC was 52.5% for Tariff, 44.5% for PCVA, and 30.3% for InterVA. For neonates, CSMF accuracy was 0.817 for Tariff, 0.719 for PCVA, and 0.629 for InterVA; CCC varied from 47.3% to 50.3% for the three automated methods, 29.3% for PCVA, and 19.4% for InterVA. The method with the highest sensitivity for a specific cause varied by cause.

Conclusions: Physician review of verbal autopsy questionnaires is less accurate than automated methods in determining both individual and population causes of death. Overall, Tariff performs as well or better than other methods and should be widely applied in routine mortality surveillance systems with poor cause of death certification practices.
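Two of the metrics named above have simple closed forms: chance-corrected concordance for a cause is (sensitivity - 1/N) / (1 - 1/N), where N is the number of causes on the list, and CSMF accuracy is 1 minus the total absolute CSMF error divided by its maximum possible value, 2(1 - min true CSMF). A minimal sketch of both follows, assuming the inputs are plain lists of true and predicted cause labels.

```python
# Minimal sketch of the two summary metrics; inputs are assumed to be
# equal-length sequences of true and predicted cause labels.
from collections import Counter


def chance_corrected_concordance(true, pred, cause):
    """CCC for one cause: (sensitivity - 1/N) / (1 - 1/N),
    where N is the number of causes on the cause list."""
    n_causes = len(set(true))
    true_pos = sum(t == cause and p == cause for t, p in zip(true, pred))
    total = sum(t == cause for t in true)
    sensitivity = true_pos / total if total else 0.0
    return (sensitivity - 1.0 / n_causes) / (1.0 - 1.0 / n_causes)


def csmf_accuracy(true, pred):
    """1 - sum_j |CSMF_true_j - CSMF_pred_j| / (2 * (1 - min_j CSMF_true_j)),
    with the minimum taken over causes present in the true data."""
    n = len(true)
    true_csmf = {c: k / n for c, k in Counter(true).items()}
    pred_csmf = {c: k / n for c, k in Counter(pred).items()}
    causes = set(true_csmf) | set(pred_csmf)
    abs_error = sum(abs(true_csmf.get(c, 0.0) - pred_csmf.get(c, 0.0))
                    for c in causes)
    return 1.0 - abs_error / (2.0 * (1.0 - min(true_csmf.values())))
```

By construction, CCC equals 1 for perfect assignment of a cause and 0 for assignment no better than chance, while CSMF accuracy equals 1 when the predicted cause fractions match the true fractions exactly.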
Background: Reliable data on the distribution of causes of death (COD) in a population are fundamental to good public health practice. In the absence of comprehensive medical certification of deaths, the only feasible way to collect essential mortality data is verbal autopsy (VA). The Tariff Method was developed by the Population Health Metrics Research Consortium (PHMRC) to ascertain COD from VA information. Given its potential for improving information about COD, there is interest in refining the method. We describe the further development of the Tariff Method.

Methods: This study uses data from the PHMRC and the National Health and Medical Research Council (NHMRC) of Australia studies. Gold standard clinical diagnostic criteria for hospital deaths were specified for a target cause list. VAs were collected from families using the PHMRC verbal autopsy instrument, including health care experience (HCE). The original Tariff Method (Tariff 1.0) was trained using the validated PHMRC database, for which VAs had been collected for deaths with hospital records fulfilling the gold standard criteria (validated VAs). In this study, the performance of Tariff 1.0 was tested using VAs from household surveys (community VAs) collected for the PHMRC and NHMRC studies. We then corrected the model to account for the previously observed biases of the model, and Tariff 2.0 was developed. The performance of Tariff 2.0 was measured at the individual and population levels using the validated PHMRC database.

Results: For median chance-corrected concordance (CCC) and mean cause-specific mortality fraction (CSMF) accuracy, and for each of the three modules with and without HCE, Tariff 2.0 performs significantly better than Tariff 1.0, especially in children and neonates. Improvement in CSMF accuracy with HCE was 2.5%, 7.4%, and 14.9% for adults, children, and neonates, respectively, and for median CCC with HCE it was 6.0%, 13.5%, and 21.2%, respectively. Similar levels of improvement are seen in analyses without HCE.

Conclusions: Tariff 2.0 addresses the main shortcomings of the application of the Tariff Method to analyze data from VAs in community settings. It provides an estimation of COD from VAs with better performance at the individual and population level than the previous version of this method, and it is publicly available for use.

Electronic supplementary material: The online version of this article (doi:10.1186/s12916-015-0527-9) contains supplementary material, which is available to authorized users.
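For context, the core of the Tariff Method is a deliberately transparent calculation: for each cause-symptom pair, the tariff expresses that symptom's endorsement rate for the cause as a robust deviation (median and interquartile range across causes), and a death is scored for each cause by summing the tariffs of its endorsed symptoms. The sketch below illustrates only that core, under the assumption of a binary symptom matrix; it omits the refinements that distinguish Tariff 2.0, such as symptom selection, score ranking against a resampled training set, and undetermined-cause cutoffs.

```python
# Hedged sketch of the core Tariff calculation. X is assumed to be a binary
# deaths-by-symptoms DataFrame and y a Series of gold standard causes;
# Tariff 2.0's ranking and cutoff refinements are intentionally omitted.
import numpy as np
import pandas as pd


def train_tariffs(X, y):
    """Tariff(cause, symptom) = (endorsement rate - median across causes)
    / interquartile range across causes."""
    endorsement = X.groupby(y).mean()                  # causes x symptoms
    median = endorsement.median(axis=0)
    iqr = endorsement.quantile(0.75) - endorsement.quantile(0.25)
    return (endorsement - median) / iqr.replace(0, np.nan)


def assign_causes(tariffs, X_new):
    """Score each death against every cause by summing the tariffs of its
    endorsed symptoms, then return the highest-scoring cause per death.
    Assumes X_new has the same symptom columns, in the same order."""
    scores = X_new.to_numpy() @ tariffs.fillna(0).T.to_numpy()
    return pd.Series(tariffs.index[scores.argmax(axis=1)], index=X_new.index)
```

In the published method, scores are converted to ranks against a training set resampled to a uniform cause distribution before the final cause is chosen; the direct argmax here is a simplification.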