Objective: To evaluate evidence from published randomised controlled trials (RCTs) for the use of task-shifting strategies for cardiovascular disease (CVD) risk reduction in low-income and middle-income countries (LMICs).
Design: Systematic review of RCTs that used a task-shifting strategy in the management of CVD in LMICs.
Data sources: We searched the following databases for relevant RCTs: PubMed from the 1940s, EMBASE from 1974, Global Health from 1910, Ovid HealthSTAR from 1966, Web of Knowledge from 1900, Scopus from 1823, CINAHL from 1937, and ClinicalTrials.gov for registered RCTs.
Eligibility criteria for selecting studies: We included RCTs published in English, with no restriction on publication year. Eligible trials used task shifting as the intervention (non-physician healthcare workers involved in prescribing medications, providing treatment and/or conducting medical testing), involved non-physician healthcare providers in the management of CV risk factors and diseases (hypertension, diabetes, hyperlipidaemia, stroke, coronary artery disease or heart failure), and were conducted in LMICs. Studies that were not RCTs were excluded.
Results: Of the 2771 articles identified, only three met the predefined criteria. All three trials were conducted in practice-based settings among patients with hypertension (two studies) or diabetes (one study), with one study also incorporating home visits. Study duration ranged from 3 to 12 months, and the task-shifting strategies included provision of medication prescriptions by nurses, community health workers and pharmacists, and telephone follow-up after hospital discharge. Both hypertension studies reported a significant mean blood pressure reduction (2/1 mm Hg and 30/15 mm Hg), and the diabetes trial reported a 1.87% reduction in glycated haemoglobin.
Conclusions: There is a dearth of evidence on the implementation of task-shifting strategies to reduce the burden of CVD in LMICs. Effective task-shifting interventions targeted at reducing the global CVD epidemic in LMICs are urgently needed.
Background: Translational research is a key area of focus of the National Institutes of Health (NIH), as demonstrated by the substantial investment in the Clinical and Translational Science Award (CTSA) program. The goal of the CTSA program is to accelerate the translation of discoveries from the bench to the bedside and into communities. Different classification systems have been used to capture the spectrum of basic to clinical to population health research, with substantial differences in the number of categories and their definitions. Evaluation of the effectiveness of the CTSA program, and of translational research in general, is hampered by the lack of rigor in these definitions and their application. This study adds rigor to the classification process by creating a checklist to evaluate publications across the translational spectrum and operationalizes these classifications by building machine learning-based text classifiers to categorize these publications.
Methods: Based on collaboratively developed definitions, we created a detailed checklist for categories along the translational spectrum from T0 to T4. We applied the checklist to CTSA-linked publications to construct a set of coded publications for use in training machine learning-based text classifiers to classify publications within these categories. The training sets combined the T1/T2 and T3/T4 categories because these publication types occur at low frequency compared with T0 publications. We then compared classifier performance across different algorithms and feature sets and applied the classifiers to all publications in PubMed indexed to CTSA grants. To validate the algorithm, we manually classified the articles with the top 100 scores from each classifier.
Results: The definitions and checklist facilitated classification and resulted in good inter-rater reliability for coding publications for the training set. The classifiers performed very well, as measured by the area under the receiver operating characteristic curve (AUC): 0.94 for the T0 classifier, 0.84 for T1/T2, and 0.92 for T3/T4.
Conclusions: The combination of definitions agreed upon by five CTSA hubs, a checklist that facilitates more uniform interpretation of those definitions, and algorithms that perform well in classifying publications along the translational spectrum provides a basis for establishing and applying uniform definitions of translational research categories. The classification algorithms allow publication analyses that would not be feasible with manual classification, such as assessing the distribution and trends of publications across the CTSA network and comparing the categories of publications and their citations to assess knowledge transfer across the translational research spectrum.
Electronic supplementary material: The online version of this article (doi:10.1186/s12967-016-0992-8) contains supplementary material, which is available to authorized users.
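To make the classification step concrete, the following is a minimal sketch of a text classifier of the kind described above. It assumes scikit-learn with TF-IDF features and logistic regression, and the tiny set of labeled example abstracts is invented for illustration; it does not reproduce the authors' algorithms, feature sets or coded training data, which compared several alternatives before reporting the AUCs above.

```python
# Minimal sketch: train a binary text classifier (T0 vs. other) on coded abstracts
# and report ROC AUC. The sample abstracts, labels, and the TF-IDF + logistic
# regression choices are illustrative assumptions, not the paper's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical coded training set: abstract text plus a binary label
# (1 = T0 basic-science publication, 0 = other translational categories).
abstracts = [
    "Knockout mice were used to study receptor signalling in vitro.",
    "A randomised trial of nurse-led hypertension management in primary care.",
    "Population-level surveillance of diabetes prevalence across districts.",
    "Crystal structure of the kinase domain reveals a novel binding pocket.",
]
labels = [1, 0, 0, 1]

X_train, X_test, y_train, y_test = train_test_split(
    abstracts, labels, test_size=0.5, random_state=0, stratify=labels
)

# Bag-of-words features weighted by TF-IDF feeding a linear classifier.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)

# Score the held-out abstracts and summarize performance with the area under
# the ROC curve, the same statistic used to evaluate the published classifiers.
scores = clf.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, scores))
```

A linear model over TF-IDF features is a common baseline for abstract-level classification; since the published work compared multiple algorithms and feature sets, this baseline simply stands in for that family of approaches.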
Bibliometrics is becoming increasingly prominent in the world of medical libraries. The number of presentations related to research impact at the Medical Library Association (MLA) annual meeting has grown in recent years. Medical centers have been using institutional dashboards to track clinical performance for over a decade, and more recently these dashboards have also included measures of academic performance. This commentary reviews current practices and considers the role of a newer metric, the relative citation ratio.
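For readers who want to experiment with the relative citation ratio on their own publication lists, the sketch below shows one way to retrieve RCR values programmatically. It assumes the NIH iCite public API endpoint and the relative_citation_ratio field name; both should be checked against the current iCite documentation, and the example PMIDs are placeholders.

```python
# Sketch: pull relative citation ratios (RCRs) for a set of PMIDs from NIH iCite.
# The endpoint and field names below are assumptions based on iCite's public API
# and should be verified against its documentation before use in a dashboard.
import requests


def fetch_rcr(pmids):
    """Return a {pmid: relative_citation_ratio} mapping for the given PubMed IDs."""
    resp = requests.get(
        "https://icite.od.nih.gov/api/pubs",
        params={"pmids": ",".join(str(p) for p in pmids)},
        timeout=30,
    )
    resp.raise_for_status()
    records = resp.json().get("data", [])
    return {rec["pmid"]: rec.get("relative_citation_ratio") for rec in records}


if __name__ == "__main__":
    # Placeholder PMIDs; substitute the publications tracked on your dashboard.
    print(fetch_rcr([27599104, 26001965]))
```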
This paper previews the imminent flood of scientific data expected from the next generation of experiments, simulations, sensors and satellites. To be exploited by search engines and data mining tools, such experimental data needs to be annotated with relevant metadata describing its provenance, content, experimental conditions and so on. The need to automate the process of going from raw data to information to knowledge is briefly discussed. The paper argues the case for creating new types of digital libraries for scientific data, offering the same management services as conventional digital libraries in addition to other data-specific services. Some likely implications of both the Open Archives Initiative and e-Science data for the future role of university libraries are briefly mentioned. A substantial subset of this e-Science data needs to be archived and curated for long-term preservation. Some of the issues involved in the digital preservation of both scientific data and the programs needed to interpret it are reviewed. Finally, the implications of this wealth of e-Science data for the Grid middleware infrastructure are highlighted.
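As an illustration of the kind of annotation the paper calls for, the sketch below builds a simple machine-readable metadata record (provenance, content, conditions) for a raw data file. The field names and the describe_dataset helper are invented for this example rather than taken from any established metadata schema such as Dublin Core.

```python
# Sketch: attach a machine-readable metadata record to a raw data file so it can
# be indexed, searched, and preserved. Field names are illustrative only and are
# not drawn from any particular metadata standard.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def describe_dataset(path, instrument, conditions):
    """Build a metadata record (content, provenance, conditions) for one data file."""
    data = Path(path).read_bytes()
    return {
        "content": {
            "filename": Path(path).name,
            "size_bytes": len(data),
            "checksum_sha256": hashlib.sha256(data).hexdigest(),  # fixity check for preservation
        },
        "provenance": {
            "instrument": instrument,
            "recorded_at": datetime.now(timezone.utc).isoformat(),
            "processing_history": [],  # appended to as derived products are generated
        },
        "conditions": conditions,  # e.g. {"temperature_K": 4.2, "beam_energy_GeV": 7.0}
    }


if __name__ == "__main__":
    sample = Path("run_0001.dat")
    sample.write_bytes(b"\x00" * 1024)  # stand-in for real detector output
    record = describe_dataset(sample, instrument="sensor-array-42",
                              conditions={"temperature_K": 4.2})
    Path("run_0001.metadata.json").write_text(json.dumps(record, indent=2))
```

Keeping the metadata in a sidecar file alongside the raw data is one simple way to make the record harvestable by digital-library and archival services without modifying the original file.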