2022
DOI: 10.1186/s41235-022-00362-0

How one block of trials influences the next: persistent effects of disease prevalence and feedback on decisions about images of skin lesions in a large online study

Abstract: Using an online, medical image labeling app, 803 individuals rated images of skin lesions as either "melanoma" (skin cancer) or "nevus" (a skin mole). Each block consisted of 80 images. Blocks could have high (50%) or low (20%) target prevalence and could provide full, accurate feedback or no feedback. As in prior work, with feedback, decision criteria were more conservative at low prevalence than at high prevalence and resulted in more miss errors. Without feedback, this low prevalence effect was reversed (al…
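The criterion shift the abstract describes is conventionally quantified with signal detection theory: criterion c (response bias) and d′ (sensitivity) computed from hit and false-alarm rates. A minimal sketch, using hypothetical rates that are not taken from the study, only to illustrate the direction of the shift:

```python
from statistics import NormalDist

def criterion(hit_rate: float, fa_rate: float) -> float:
    """SDT criterion c: positive = conservative (fewer 'melanoma' responses)."""
    z = NormalDist().inv_cdf
    return -(z(hit_rate) + z(fa_rate)) / 2

def dprime(hit_rate: float, fa_rate: float) -> float:
    """SDT sensitivity d': separation of signal and noise distributions."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates (not from the paper): at low prevalence with feedback,
# observers respond conservatively and miss more targets.
c_low  = criterion(hit_rate=0.70, fa_rate=0.10)
c_high = criterion(hit_rate=0.85, fa_rate=0.25)
assert c_low > c_high  # more conservative criterion at low prevalence
```

Under this convention a more positive c corresponds to the conservative shift (more misses) the paper reports at low prevalence with feedback.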

Cited by 4 publications (9 citation statements) | References 28 publications
“…These results add to evidence that forensic science decision-making can be impacted by task-irrelevant extraneous factors and cognitive bias (see [ 41 ] for review) and provide further evidence that standard forensic training does not inoculate against base rate-induced biases, as forensic trainees and novices were equally susceptible to the low prevalence effect. As the low prevalence effect was observed in both trainees and novices, these results also add to growing evidence that expertise or experience does not necessarily inoculate decision-makers against the low prevalence effect–a bias that has been identified amongst other professionals, including TSA baggage screeners [ 18 ], security professionals [ 21 ], and doctors [ 28 ].…”
Section: Discussion
confidence: 57%
“…Although we cannot draw explicit conclusions about whether professional forensic examiners are also susceptible to the low prevalence effect, it is important to note that experience and training do not typically ameliorate this potential source of bias. For example, both medical students and fully-qualified doctors are equally susceptible to the low prevalence effect in detecting cancer lesions [ 28 ]. Many other studies investigating this effect also use trainee samples–for example, newly-trained TSA baggage screeners [ 18 ] or newly-trained security screeners [ 21 ].…”
Section: Discussion
confidence: 99%
“…This suggests that individual radiologists have more consistent and systematic biases in this simulated tumor matching task compared to untrained observers, indicating that their expertise or experience is in fact reflected in this task. Although radiologists and untrained observers had similar sensitivity, as measured by JNDs, this is not surprising since previous studies have found that untrained, naïve observers can perform significantly better than chance in the Vanderbilt Chest Radiograph Test (Sunday et al, 2017), and other studies found that MDs are not always more sensitive than untrained participants in medical image perception tasks, and sometimes they even have lower sensitivity compared to less experienced observers (e.g., Wolfe, 2022). The similar sensitivity in MDs and untrained observers in our experiment could be due to a ceiling effect in our data, but the fact that the consistency in reports is higher for MDs suggests that they do, in a sense, perform the task better than untrained observers.…”
Section: Discussion
confidence: 62%
“…Serial dependence will therefore not show up in typical analyses because (1) responses are pooled or collapsed across blocks of trials and (2) sequential similarity is unknown or ignored. So, it is not surprising that serial dependence was not found in a previous study [39] because that study did not measure sequential stimulus similarity and it pooled trials together in blocks, washing out any serial dependence that may have been present. The results of the large data set here confirm that serial dependence is likely to be present in other similar data sets, such as [39].…”
Section: Discussion
confidence: 95%