Objectives: To examine the generalizability of two large randomized controlled clinical trials of antiretroviral therapy in HIV-infected individuals. Methods: The demographic, clinical and laboratory characteristics of HIV-infected participants in two antiretroviral trials (Concorde and Delta) at three study sites were compared with those of two other groups of patients to whom the trial results would be applicable: eligible patients who were screened for the trials but did not enrol, and eligible patients who were not approached or screened for the trials. Results: Among enrolled participants in the Concorde and Delta trials there was an under-representation of patients who had acquired HIV infection heterosexually (P = 0.014) or through injecting drug use (P = 0.03), and a greater representation of homosexual men (P < 0.001), compared with non-enrolled patients. Trial participants in Concorde had significantly less advanced immunosuppression than non-trial patients (P = 0.0001), while in Delta the converse was true. Concorde participants were also much less likely to be lost to follow-up for more than a year (9%) than eligible but unscreened patients (40%) (P < 0.001) or screened but unenrolled patients (22%) (P = 0.035). Conclusions: In applying the findings of large randomized clinical trials, it is important to establish whether there are systematic differences between the characteristics of trial participants and eligible non-participants that might affect the generalizability of the study results. A log of the characteristics of enrolled as well as eligible but non-enrolled patients should be maintained so that the representativeness of the trial population can be evaluated.
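The enrolment comparisons reported above are, in essence, contingency-table tests of proportions between enrolled and non-enrolled patients. A minimal sketch of that kind of check in Python, using made-up counts purely for illustration (none of these numbers come from the Concorde or Delta data):

    # Compare route of HIV transmission between enrolled and non-enrolled patients.
    # The counts below are hypothetical, for illustration only.
    from scipy.stats import chi2_contingency

    #               homosexual  heterosexual  injecting drug use
    enrolled     = [      180,           25,                 15]
    non_enrolled = [      120,           45,                 35]

    chi2, p_value, dof, expected = chi2_contingency([enrolled, non_enrolled])
    print(f"chi-square = {chi2:.2f}, df = {dof}, P = {p_value:.3f}")

The same comparison, repeated for each baseline characteristic, is how the representativeness of a trial log could be evaluated in practice.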
The ability to calibrate one’s confidence accurately is essential for effective decisions. Overprecision is the excessive faith that one knows the truth. This paper proposes a new theory to explain it. The theory holds that overprecision in judgment results from individuals’ neglect of all the ways in which they could be wrong. When there are an infinite number of ways to be wrong, it is impossible to consider them all. Overprecision in judgment, it turns out, is primarily attributable to being wrong and not knowing it. This explanation can account for the persistence of overprecision not only among people but also among artificially intelligent agents.
In this paper, we investigate the calibration of public election polls. We present a definition of poll accuracy based on whether the poll’s margin of error spans the true election outcome. Most polls report a 95% confidence interval along with their results, so we sought to determine whether their accuracy is as high as their confidence levels claim. We also examined how this accuracy evolves as polls are conducted closer to the actual election. We find that even a week before the election, polls claiming 95% confidence are accurate only 60% of the time. Moreover, we conclude that these polls would need margins of error twice their reported width in order to be truly 95% confident. This provides a unique insight into the adjustment polls need over time and quantifies the systemic error polls contain beyond what traditional statistics capture.
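A minimal sketch of the coverage check this abstract describes: count how often a poll’s reported interval actually spans the election outcome, and find how much the intervals would need to widen to reach nominal coverage. The records and field names below are assumptions for illustration, not the paper’s data:

    # Empirical coverage of poll confidence intervals (hypothetical records).
    polls = [
        {"estimate": 4.0, "margin_of_error": 3.0, "true_margin": 1.5},
        {"estimate": -2.0, "margin_of_error": 3.5, "true_margin": 2.0},
        {"estimate": 6.5, "margin_of_error": 4.0, "true_margin": 3.0},
    ]

    def covered(poll, widen=1.0):
        # True if the (optionally widened) margin of error spans the true outcome.
        return abs(poll["estimate"] - poll["true_margin"]) <= widen * poll["margin_of_error"]

    coverage = sum(covered(p) for p in polls) / len(polls)
    print(f"Empirical coverage at reported width: {coverage:.0%}")

    # Smallest widening factor at which the intervals reach 95% empirical coverage.
    widen = 1.0
    while sum(covered(p, widen) for p in polls) / len(polls) < 0.95:
        widen += 0.05
    print(f"Approximate widening factor for 95% coverage: {widen:.2f}")

Applied to a real poll archive, the ratio of the required widening factor to 1 is one way to quantify the systemic error the abstract refers to.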