There has been considerable interest in understanding what may have led to Uganda's dramatic decline in HIV prevalence, one of the world's earliest and most compelling AIDS prevention successes. Survey and other data suggest that a decline in multi-partner sexual behavior is the behavioral change most likely associated with HIV decline. It appears that behavior change programs, particularly involving extensive promotion of "zero grazing" (faithfulness and partner reduction), largely developed by the Ugandan government and local NGOs including faith-based, women's, people-living-with-AIDS and other community-based groups, contributed to the early declines in casual/multiple sexual partnerships and HIV incidence and, along with other factors including condom use, to the subsequent sharp decline in HIV prevalence. Yet the debate over "what happened in Uganda" continues, often involving divisive abstinence-versus-condoms rhetoric, which appears more related to the culture wars in the USA than to African social reality.
Background: Translational research is a key area of focus of the National Institutes of Health (NIH), as demonstrated by the substantial investment in the Clinical and Translational Science Award (CTSA) program. The goal of the CTSA program is to accelerate the translation of discoveries from the bench to the bedside and into communities. Different classification systems have been used to capture the spectrum of basic to clinical to population health research, with substantial differences in the number of categories and their definitions. Evaluation of the effectiveness of the CTSA program, and of translational research in general, is hampered by the lack of rigor in these definitions and their application. This study adds rigor to the classification process by creating a checklist to evaluate publications across the translational spectrum and operationalizes these classifications by building machine learning-based text classifiers to categorize these publications.
Methods: Based on collaboratively developed definitions, we created a detailed checklist for categories along the translational spectrum from T0 to T4. We applied the checklist to CTSA-linked publications to construct a set of coded publications for use in training machine learning-based text classifiers to classify publications within these categories. The training sets combined the T1/T2 and T3/T4 categories due to the low frequency of these publication types compared to the frequency of T0 publications. We then compared classifier performance across different algorithms and feature sets and applied the classifiers to all publications in PubMed indexed to CTSA grants. To validate the algorithm, we manually classified the articles with the top 100 scores from each classifier.
Results: The definitions and checklist facilitated classification and resulted in good inter-rater reliability for coding publications for the training set.
Very good performance was achieved for the classifiers, as measured by the area under the receiver operating characteristic curve (AUC): 0.94 for the T0 classifier, 0.84 for T1/T2, and 0.92 for T3/T4.
Conclusions: The combination of definitions agreed upon by five CTSA hubs, a checklist that facilitates more uniform interpretation of those definitions, and algorithms that perform well in classifying publications along the translational spectrum provides a basis for establishing and applying uniform definitions of translational research categories. The classification algorithms allow publication analyses that would not be feasible with manual classification, such as assessing the distribution and trends of publications across the CTSA network and comparing the categories of publications and their citations to assess knowledge transfer across the translational research spectrum.
Electronic supplementary material: The online version of this article (doi:10.1186/s12967-016-0992-8) contains supplementary material, which is available to authorized users.
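The pipeline the abstract describes (text features extracted from publications, a supervised classifier per translational category, evaluated by ROC AUC) can be sketched as follows. This is a minimal illustration, not the study's implementation: the toy texts, labels, and the choice of TF-IDF features with logistic regression are assumptions for demonstration only.

```python
# Minimal sketch: train a binary text classifier (here: T0 vs. other)
# on labeled publication snippets and score it with ROC AUC.
# All texts and labels below are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline

# Toy coded training set: 1 = T0 (basic/preclinical), 0 = other categories
texts = [
    "protein expression in murine cell lines",
    "receptor binding assay in vitro kinetics",
    "gene knockout mouse model of disease",
    "randomized controlled trial of a community intervention",
    "population health outcomes and policy implementation",
    "dissemination of clinical guidelines in primary care",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features feeding a linear classifier, fit on the coded set
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# Score held-out examples; a real evaluation would use cross-validation
test_texts = [
    "in vitro enzyme inhibition in cultured cells",
    "statewide rollout of a public health program",
]
scores = clf.predict_proba(test_texts)[:, 1]  # probability of class T0
auc = roc_auc_score([1, 0], scores)
```

In the study, separate classifiers were built per category group (T0, T1/T2, T3/T4) and applied at scale to CTSA-indexed PubMed publications; the sketch shows only the shape of one such classifier.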
Introduction: This study uses KL2 scholars' publications to evaluate the types of research the KL2 program supports and to assess the initial productivity and impact of its scholars. Methods: We illustrate the feasibility of three different approaches to bibliometrics and one viable method for determining the types of research a program or hub supports, and demonstrate how these data can be further combined with internal data records. Results: Gender differences were observed in the types of research scholars undertake. Overall, KL2 scholars are performing well, with their publications being cited more than the norm for NIH publications. Favorable results were also observed in scholars' continued engagement in research. Conclusion: This study illustrates that linking bibliometric data, and data categorizing publications along the translational spectrum, with a CTSA hub's internal data records is feasible and offers a number of innovative possibilities for the evaluation of a CTSA hub's programs and investigators.
The success case studies approach examines in depth what works well in a program by describing cases and examining factors leading to successful outcomes. In this paper, we describe the use of success case studies as part of an evaluation of the transformation of a health sciences research support infrastructure. Using project-specific descriptions and the researchers' perceptions of the impact of improved research infrastructure, we added depth of understanding to the quantitative data required by funding agencies. Each case study included an interview with the lead researcher, along with review of documents about the research, the investigator, and their collaborators. Our analyses elucidated themes regarding contributions of the Clinical and Translational Science Awards (CTSA) program of the National Institutes of Health (NIH) to scientific achievements and career advancement of investigators in one academic institution.
The Clinical and Translational Science Award (CTSA) program is an ambitious multibillion-dollar initiative sponsored by the National Institutes of Health (NIH), organized around the mission of improving the quality, efficiency, and effectiveness of translational health sciences research across the country. Although the NIH explicitly requires internal evaluation, funded CTSA institutions are given wide latitude to choose the structure and methods for evaluating their local CTSA program. The National Evaluators Survey was developed by a peer-led group of local CTSA evaluators as a voluntary effort to understand emerging differences and commonalities in evaluation teams and techniques across the 61 CTSA institutions funded nationwide. This article presents the results of the 2012 National Evaluators Survey, finding significant heterogeneity in evaluation staffing, organization, and methods across the 58 CTSA institutions responding. The variety reflected in these findings represents both a liability and a strength. A lack of standardization may impair the ability to make use of common metrics, but variation is also a successful evolutionary response to complexity. Additionally, the peer-led approach and simple design demonstrated by the questionnaire itself has value as an example of an evaluation technique with potential for replication in other areas across the CTSA institutions, or in any large-scale investment where multiple related teams across a wide geographic area are given the latitude to develop specialized approaches to fulfilling a common mission.