Background: One of the most challenging daily problems in intensive care medicine is the interpretation of results from diagnostic tests. In neonatology and pediatric intensive care, the early diagnosis of potentially life-threatening infections is a particularly important issue. Focus: A plethora of tests have been proposed to improve diagnostic decision making in the clinical setting of infection, the clinical example used throughout this article. Several criteria that are critical to the evidence-based appraisal of published data are often not adhered to during study conduct or reporting. To enhance the critical appraisal of articles on diagnostic tests, we discuss various measures of test accuracy: sensitivity, specificity, receiver operating characteristic curves, positive and negative predictive values, likelihood ratios, pretest probability, posttest probability, and the diagnostic odds ratio. Conclusions: We suggest the following minimal requirements for reporting on the diagnostic accuracy of tests: a plot of the raw data, multilevel likelihood ratios, the area under the receiver operating characteristic curve, and the cutoff yielding the highest discriminative ability. For critical appraisal it is mandatory to report confidence intervals for each of these measures. Moreover, to allow comparison with the reader's patient population, authors should provide data on study population characteristics, in particular on the spectrum of diseases and illness severity.
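The accuracy measures listed above all derive from a single 2x2 table of test results against true disease status. As an illustrative sketch (the counts and the 20% pretest probability below are hypothetical, not drawn from any study), the following Python computes sensitivity, specificity, both likelihood ratios, the diagnostic odds ratio, and the posttest probability via Bayes' theorem in odds form:

```python
# Illustrative sketch of the accuracy measures discussed in the abstract.
# All counts are hypothetical and chosen only to make the arithmetic clear.

def sensitivity(tp, fn):
    """Proportion of diseased patients with a positive test."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of non-diseased patients with a negative test."""
    return tn / (tn + fp)

def likelihood_ratios(sens, spec):
    lr_pos = sens / (1 - spec)   # LR+: factor by which a positive result raises the odds
    lr_neg = (1 - sens) / spec   # LR-: factor by which a negative result lowers the odds
    return lr_pos, lr_neg

def posttest_probability(pretest_prob, lr):
    """Bayes' theorem in odds form: posttest odds = pretest odds * LR."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# Hypothetical 2x2 table: 90 true positives, 10 false negatives,
# 20 false positives, 80 true negatives.
sens = sensitivity(90, 10)                      # 0.90
spec = specificity(80, 20)                      # 0.80
lr_pos, lr_neg = likelihood_ratios(sens, spec)  # ~4.5 and ~0.125
dor = lr_pos / lr_neg                           # diagnostic odds ratio, ~36
p_post = posttest_probability(0.20, lr_pos)     # pretest 20% -> posttest ~53%
```

Note how a positive result from this hypothetical test raises the probability of disease from 20% to roughly 53%: multilevel likelihood ratios let clinicians perform exactly this update for each possible test result, which is why the abstract recommends reporting them.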
The daily risk of pneumonia decreases with increasing duration of stay in the intensive care unit. Witnessed aspiration and exposure to paralytic agents are potentially modifiable independent risk factors. Exposure to antibiotics was associated with low rates of early ventilator-associated pneumonia, but this effect attenuated over time.
Clinical decisions should be based on the totality of the best evidence, not on the results of individual studies. When clinicians apply the results of a systematic review or meta-analysis to patient care, they should start by evaluating the credibility of the review's methods, i.e., the extent to which those methods have likely protected against misleading results. Credibility depends on whether the review addressed a sensible clinical question; included an exhaustive literature search; demonstrated reproducibility of the selection and assessment of studies; and presented results in a useful manner. For reviews that are sufficiently credible, clinicians must decide on the degree of confidence in the estimates that the evidence warrants (quality of evidence). Confidence depends on the risk of bias in the body of evidence; the precision and consistency of the results; whether the results directly apply to the patient of interest; and the likelihood of reporting bias. Shared decision making requires understanding of the estimated magnitude of beneficial and harmful effects, and of the confidence in those estimates.