This study investigates the effect of competitive project funding on researchers' publication outputs. Using detailed information on applicants at the Swiss National Science Foundation and their proposal evaluations, we employ a case-control design that accounts for individual heterogeneity of researchers and selection into treatment (i.e. funding). We estimate the impact of the grant award on a set of output indicators measuring the creation of new research results (the number of peer-reviewed articles), their relevance (number of citations and relative citation ratios), as well as their accessibility and dissemination as measured by the publication of preprints and by altmetrics. The results show that the funding program facilitates the publication and dissemination of additional research amounting to about one additional article in each of the three years following the funding. The higher citation metrics and altmetrics of funded researchers suggest that the impact goes beyond quantity and that funding fosters dissemination and quality.
Objectives: To examine whether the gender of applicants and peer reviewers and other factors influence the peer review of grant proposals submitted to a national funding agency.
Setting: Swiss National Science Foundation (SNSF).
Design: Cross-sectional analysis of peer review reports submitted from 2009 to 2016, using linear mixed effects regression models adjusted for research topic, applicant's age, nationality, affiliation and calendar period.
Participants: External peer reviewers.
Primary outcome measure: Overall score on a scale from 1 (worst) to 6 (best).
Results: Analyses included 38 250 reports on 12 294 grant applications from medicine, architecture, biology, chemistry, economics, engineering, geology, history, linguistics, mathematics, physics, psychology and sociology submitted by 26 829 unique peer reviewers. In univariable analysis, male applicants received more favourable evaluation scores than female applicants (+0.18 points; 95% CI 0.14 to 0.23), and male reviewers awarded higher scores than female reviewers (+0.11; 95% CI 0.08 to 0.15). Applicant-nominated reviewers awarded higher scores than reviewers nominated by the SNSF (+0.53; 95% CI 0.50 to 0.56), and reviewers from outside Switzerland awarded more favourable scores than reviewers affiliated with Swiss institutions (+0.53; 95% CI 0.49 to 0.56). In multivariable analysis, differences between male and female applicants were attenuated (+0.08; 95% CI 0.04 to 0.13), whereas results changed little for source of nomination and affiliation of reviewers. The gender difference increased after September 2011, when new evaluation forms were introduced (p=0.033 from test of interaction).
Conclusions: Peer review of grant applications at the SNSF might be prone to biases stemming from different applicant and reviewer characteristics. The SNSF has abandoned the nomination of peer reviewers by applicants. The new form introduced in 2011 may inadvertently have given more emphasis to the applicant's track record.
We encourage other funders to conduct similar studies, in order to improve the evidence base for rational and fair research funding.
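The linear mixed effects regression described above can be illustrated with a minimal sketch. This is not the SNSF analysis itself: the data are synthetic and all column names (`score`, `applicant_male`, `reviewer_male`, `snsf_nominated`, `application_id`) are illustrative assumptions. The key design point it shows is a random intercept per application, which accounts for the same proposal receiving multiple review reports.

```python
# Hedged sketch of a linear mixed effects model for review scores,
# fit on synthetic data; column names are illustrative, not the SNSF's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "score": rng.normal(4.0, 1.0, n).clip(1, 6),   # overall score, 1-6
    "applicant_male": rng.integers(0, 2, n),
    "reviewer_male": rng.integers(0, 2, n),
    "snsf_nominated": rng.integers(0, 2, n),
    "application_id": rng.integers(0, 150, n),     # grouping factor
})

# Random intercept per application: multiple reviews of one proposal
# are correlated, so they should not be treated as independent.
model = smf.mixedlm(
    "score ~ applicant_male + reviewer_male + snsf_nominated",
    data=df,
    groups=df["application_id"],
)
result = model.fit()
print(result.params)  # fixed-effect estimates, e.g. the adjusted gender gap
```

In the real analysis the fixed-effects part would additionally adjust for research topic, applicant age, nationality, affiliation and calendar period, as the abstract states.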
Clinical prediction models play a key role in risk stratification, therapy assignment and many other fields of medical decision making. Before they can enter clinical practice, their usefulness has to be demonstrated through systematic validation. Methods to assess predictive performance have been proposed for continuous, binary, and time-to-event outcomes, but the literature on validation methods for discrete time-to-event models with competing risks is sparse. The present paper aims to fill this gap and proposes new methodology to quantify discrimination, calibration, and prediction error (PE) for discrete time-to-event outcomes in the presence of competing risks. In our case study, the goal was to predict the risk of ventilator-associated pneumonia (VAP) attributed to Pseudomonas aeruginosa in intensive care units (ICUs). Competing events are extubation, death, and VAP due to other bacteria. The aim of this application is to validate complex prediction models developed in previous work on more recently available validation data.
KEYWORDS: area under the curve, calibration slope, competing events, discrete time-to-event model, dynamic prediction models, prediction error, validation
INTRODUCTION
Clinical prediction models aim to give valid outcome predictions for new patients and to provide a good basis for treatment decisions. Such models need to be systematically validated before entering clinical practice. Assessing the predictive performance in the data set from which the model has been derived will almost certainly give an assessment that is too optimistic. To avoid this issue, some kind of cross-validation or external validation is needed. Ideally, the performance of the prediction model is assessed in a second independent data set (Steyerberg, 2009). Such validation data, also referred to as testing data, should incorporate new patients from a different time period or patients from a different center.
To quantify how well the prediction model performs, commonly used measures of discrimination and calibration can be computed. A model has satisfactory discrimination if it is able to adequately discriminate between cases and controls. Moreover, a well-calibrated model guarantees good agreement between observed outcomes and predictions. Finally, to evaluate overall performance, quadratic scoring rules like the prediction error (PE) or Brier score (BS) can be calculated to simultaneously assess calibration and discrimination.
The development of clinical prediction models requires the selection of suitable predictor variables. Techniques to perform objective Bayesian variable selection in the linear model are well developed and have been extended to the generalized linear model setting as well as to the Cox proportional hazards model. Here, we consider discrete time-to-event data with competing risks and propose methodology to develop a clinical prediction model for the daily risk of acquiring a ventilator-associated pneumonia (VAP) attributed to P. aeruginosa (PA) in intensive care units. The competing events for a PA VAP are extubation, death, and VAP due to other bacteria. Baseline variables are potentially important to predict the outcome at the start of ventilation, but may lose some of their predictive power after a certain time. Therefore, we use a landmark approach for dynamic Bayesian variable selection where the set of relevant predictors depends on the time already spent at risk. We finally determine the direct impact of a variable on each competing event through cause-specific variable selection.
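A common way to fit discrete time-to-event models with competing risks, consistent with the cause-specific view described above, is to expand the data to one row per person-day and fit a multinomial regression over the possible daily outcomes. The sketch below uses synthetic person-day data with illustrative covariates (the actual predictors, landmarking, and Bayesian variable selection of the paper are not reproduced here).

```python
# Hedged sketch: discrete-time competing risks as a multinomial logistic
# regression on person-day data; covariates and rates are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000  # person-day rows
X = np.column_stack([
    rng.integers(1, 15, n),   # day of ventilation (time already at risk)
    rng.normal(size=n),       # e.g. a severity score at admission
    rng.integers(0, 2, n),    # e.g. prior antibiotic exposure (0/1)
])
# Daily outcome: 0 = still at risk, 1 = PA VAP, 2 = extubation,
# 3 = death, 4 = VAP due to other bacteria (competing events).
y = rng.choice(5, size=n, p=[0.85, 0.03, 0.07, 0.03, 0.02])

# One multinomial model gives a cause-specific daily hazard per event type.
clf = LogisticRegression(max_iter=1000).fit(X, y)
hazards = clf.predict_proba(X[:1])  # probabilities of each outcome tomorrow
print(hazards.round(3))
```

In the landmark approach mentioned in the abstract, a separate model of this form (or one with landmark-time interactions) would be fit at each landmark time, so the selected predictors can change with the time already spent at risk.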
The trauma team activation criteria could be reduced to eight predictors without losing predictive performance. Non-relevant parameters, such as EMS provider judgement, endotracheal intubation, suspected paralysis, the presence of a burned body surface of >20% and suspected fractures of two proximal long bones, could be excluded from full trauma team activation. The fact that the emergency physicians reduced under-triage better than our final triage model suggests that variables not present in the S3 guideline may be relevant for prediction.