Research suggests that most people struggle to interpret the outcomes of diagnostic tests, such as those presented as Bayesian inference problems. To help people interpret these difficult problems, we created a brief tutorial, requiring less than 10 minutes, that guided participants through the creation of an aid (either a graph or a table) based on an example inference problem and then showed the correct way to calculate the problem's positive predictive value (i.e., the likelihood that a positive test correctly indicates the presence of a condition). Approximately 70% of those in each training condition found the correct response on at least one problem in the format for which they were trained. Just under 55% of those in the control condition (i.e., no training) found the correct response on at least one table or graph problem. We demonstrated a relationship between numeracy and performance on both problem formats, although we found no evidence of a relationship between graph literacy and performance for either format. Potential improvements to and applications of the tutorial are discussed.
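To make the quantity at issue concrete, the positive predictive value described above can be computed with Bayes' theorem. The sketch below uses invented illustrative numbers, not values from the study:

```python
# Hypothetical PPV calculation via Bayes' theorem.
# All numbers are illustrative assumptions, not data from the study above.

base_rate = 0.01        # P(condition present) in the tested population
sensitivity = 0.80      # P(positive test | condition present)
false_positive = 0.096  # P(positive test | condition absent)

# P(positive) by the law of total probability
p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)

# PPV = P(condition present | positive test)
ppv = sensitivity * base_rate / p_positive
print(f"PPV = {ppv:.3f}")  # far lower than most reasoners intuitively expect
```

With these assumed inputs the PPV is roughly 0.078, illustrating why untrained reasoners, who often answer with a value near the sensitivity, tend to overestimate it badly.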
We propose that a mismatch between problem presentation and question structure may promote errors on Bayesian reasoning problems. In this task, people determine the likelihood that a positive test actually indicates the presence of a condition. Research has shown that people routinely fail to correctly identify this positive predictive value (PPV). We point out that the typical problem structure is likely to confuse reasoners by focusing on the incorrect reference class for answering this diagnostic question, instead providing the anchor needed to address a different diagnostic question, about sensitivity (SEN). We describe two experiments in which participants answered diagnostic questions using problems presented with congruent or incongruent reference classes. Aligning reference classes eases both representational and computational difficulties, increasing the proportion of participants who were consistently accurate to an unprecedented 93% on PPV questions and 69% on SEN questions. Analysis of response components from incongruent problems indicated that many errors reflect difficulties in identifying and applying appropriate values from the problem, prerequisite processes that contribute to computational errors. We conclude with a discussion of the need, especially in applied settings and on initial exposure, to adopt problem presentations that guide, rather than confuse, the organization and use of diagnostic information.
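The reference-class idea above can be sketched with a natural-frequency 2x2 table: PPV anchors on the column of positive tests, while SEN anchors on the row of people who have the condition. The counts below are illustrative assumptions, not data from the experiments:

```python
# Hypothetical natural-frequency 2x2 table for a diagnostic test.
# Counts are invented for illustration only.

true_pos = 8      # condition present, test positive
false_neg = 2     # condition present, test negative
false_pos = 95    # condition absent, test positive
true_neg = 895    # condition absent, test negative

# PPV uses the reference class of positive tests: P(condition | positive)
ppv = true_pos / (true_pos + false_pos)

# SEN uses the reference class of people with the condition: P(positive | condition)
sen = true_pos / (true_pos + false_neg)

print(f"PPV = {ppv:.3f}, SEN = {sen:.3f}")
```

A presentation that groups the numbers by test outcome is congruent with the PPV question, while one grouped by condition status is congruent with the SEN question; the mismatch case asks the reasoner to reorganize the table mentally.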
Although often overlooked, a straightforward mapping of reference classes from the relevant diagnostic information to the question of interest reduces confusion and substantially increases accuracy in estimates of diagnostic values. These findings can be used to strengthen training in the assessment of uncertainties associated with medical test results.
Shared decision making places an emphasis on patient understanding and engagement. However, when it comes to treatment selection, research tends to focus on how doctors select pharmaceutical treatments. The current study is a qualitative assessment of how patients choose among three common treatments that have varying degrees of scientific support and side effects. We used qualitative data from 157 undergraduates (44 males, 113 females; mean age = 21.89 years) collected as part of a larger correlational study of depression and critical thinking skills. Qualitative analysis revealed three major themes: shared versus independent decision making, confidence in the research and the drug, and cost and availability. Some participants preferred to rely on informal networks such as consumer testimonials, while others expressed a false sense of security about over-the-counter treatments because they believed the drugs are regulated. Many indicated that they avoid seeking mental health services because of the time and money required. The results indicate that several factors influence selection of common depression treatments. Young adults report that when reading prescription information, they most often rely on perceptions including ease of access, price, and beliefs about drug regulation. General guidelines for treatment descriptions were created based on the qualitative analysis.