We analyze an insurance demand experiment conducted in two different settings: in person at a university laboratory and online via a crowdworking platform. Subject demographics differ across the samples, but average insurance demand is similar. However, choice patterns suggest that online subjects are less cognitively engaged: their demand varies more, and they react less to changes in exogenous features of the insurance situation. Applying data quality filters does not make demand patterns more comparable across the samples. Moreover, although online subjects pass comprehension questions at the same rate as in-person subjects, they behave more randomly on other questions. We also find that online subjects are more likely to engage in "coarse thinking," choosing from a reduced set of options. Our results justify caution in using crowdsourced subjects for insurance demand experiments. We outline best practices that may help improve the quality of data from experiments conducted via crowdworking platforms.