Dishonest and fraudulent behavior poses a serious threat to individuals and societies alike. Many studies investigating dishonesty rely on a small set of well-established lab and online cheating paradigms. Surprisingly, though, the external validity of these paradigms has been investigated in only a handful of studies, raising the question of whether behavior in these paradigms relates to real-life dishonesty or, more broadly, socially questionable behavior. Tackling this gap, we link observed behavior in two widely used cheating paradigms to approval rates on two crowdworking platforms (Prolific and Amazon Mechanical Turk) using data from four studies (overall N = 5,183). Results indicate that lower approval rates are associated with higher proportions of dishonest individuals. Importantly, this relation also holds for crowdworkers who exceed commonly used thresholds for study inclusion. The results thus support the external validity of the two cheating paradigms. Further, the study identifies approval rates as a variable that explains dishonesty on crowdworking platforms.