Using an online medical-image-labeling app, 803 individuals rated images of skin lesions as either "melanoma" (skin cancer) or "nevus" (a benign skin mole). Images were presented in blocks of 80. Blocks could have high (50%) or low (20%) target prevalence and could provide either full, accurate feedback or no feedback. As in prior work, with feedback, decision criteria were more conservative at low prevalence than at high prevalence, producing more miss errors. Without feedback, this low prevalence effect was reversed (albeit not significantly). Participants could complete up to four different conditions per day on each of six days. Our main interest was in the effect of Block N on Block N + 1. Low prevalence with feedback made participants more conservative on the subsequent block; high prevalence with feedback made them more liberal. Conditions without feedback had no significant impact on the subsequent block. The delay between Blocks N and N + 1 had no significant effect, and the effect on the second half of Block N + 1 was just as large as on the first half. Medical expertise (over the range available in the study) did not modulate these effects, though medical students were better at the task than other groups. Overall, these appear to be robust effects in which feedback may be 'teaching' participants how to respond in the future. This could have applications in, for example, training or retraining settings.
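For readers unfamiliar with the criterion measure, here is a minimal sketch in standard equal-variance signal detection terms (an assumption on our part; the abstract does not name its estimator). With hit rate $H$ and false-alarm rate $F$,

$$c = -\tfrac{1}{2}\left[\Phi^{-1}(H) + \Phi^{-1}(F)\right],$$

where $\Phi^{-1}$ is the inverse of the standard normal cumulative distribution function. Positive $c$ is conservative (fewer "melanoma" responses and hence more misses, the pattern reported at low prevalence with feedback); negative $c$ is liberal (more "melanoma" responses and hence more false alarms).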