Much social marketing research is conducted on-line, recruiting participants through Amazon Mechanical Turk, vetted panel vendors, social media, or community sources. When compensation is offered, care must be taken to distinguish genuine respondents from those with ulterior motives. We present a case study based on unanticipated empirical observations made while evaluating perceived effectiveness (PE) ratings of anti-tobacco public service announcements (PSAs) using facial expression (FE) analysis (pretesting). This study alerts social marketers to the risk and impact of disinterest or fraud in compensated on-line surveys. We introduce FE analysis as a means to detect and remove bad data, improving the rigor and validity of on-line data collection. We also compare community (free) and vetted panel (fee added) recruitment in terms of the usable samples they produce.

Methods: We recruited respondents through community sources (Community) and through a well-known panel vendor (Panel). Respondents completed a one-time, random block design Qualtrics® survey that collected PE ratings and recorded FE in response to the PSAs. We used the AFFDEX® feature of iMotions® to calculate respondent attention and expressions; we also visually inspected respondent video records. Based on this quantitative/qualitative analysis, we divided 501 respondents (1,503 observations) into three groups: (1) those demonstrably watching PSAs before rating them (Valid), (2) those who were inattentive but completed the rating tasks (Disinterested), and (3) those employing various techniques to game the system (Deceitful). We used one-way analysis of variance (ANOVA) of attention (head positioning), engagement (all facial expressions), and specific facial expressions (FEs) to test the likelihood that a respondent fell into one of the three behavior groups.

Results: PE ratings: The Community pool (N = 92) was infiltrated by Deceitful actors (58%), but the remaining 42% were attentive (i.e., showed no disinterest). The Panel pool (N = 409) included 11% Deceitful and 2% Disinterested respondents. Over half of the PSAs changed rank order when Deceitful responses were included in the Community sample. The smaller proportion of Deceitful and Disinterested (D&D) respondents in the Panel affected 2 of the 12 videos. In both samples, the effect was to lower the PE ranking of more diverse and "locally made" PSAs. D&D responses clustered tightly around the mean values, which we believe to be an artefact of "professional" test-taking behavior. FE analysis: The combined Valid sample was attentive a greater share of the time (87.2%) than the Disinterested (51%) or Deceitful (41%) groups (ANOVA F = 195.6, p < .001). Models using engagement and specific FEs ("cheek raise" and "smirk") distinguished Valid from D&D responses.

Recommendations: False PE pretesting scores waste social marketing budgets and could have disastrous results. Risk can be reduced by using vetted panels, with the trade-off that community sources may produce more authentically interested respondents. We provide ways to make surveys more tamper-evident, with and without webcam recording, as well as procedures to clean data. Check data before compensating respondents!

Limitations: This was an incidental finding within a parent study. The study required computers with webcams, which potentially biased the pool of survey respondents. The Community pool is smaller than the Panel group, limiting statistical power.
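
As a minimal illustration of the screening approach described in the Methods, the sketch below runs a one-way ANOVA of attention across the three behavior groups and flags low-attention respondents for review before compensation. It is not the authors' code: the file name, column names, and attention cut-off are hypothetical, standing in for per-respondent metrics exported from iMotions/AFFDEX.

```python
# Sketch only: ANOVA of attention across behavior groups plus a pre-payment
# screening flag. Assumes a CSV with hypothetical columns: respondent_id,
# group ("Valid" / "Disinterested" / "Deceitful"), attention_pct (0-100).
import pandas as pd
from scipy.stats import f_oneway

df = pd.read_csv("respondent_metrics.csv")  # hypothetical export of per-respondent metrics

# One-way ANOVA: does mean attention differ across the three behavior groups?
groups = [g["attention_pct"].values for _, g in df.groupby("group")]
f_stat, p_value = f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.1f}, p = {p_value:.4f}")

# Illustrative screening rule before compensating respondents: flag anyone
# whose attention falls below an assumed cut-off for manual video review.
ATTENTION_CUTOFF = 60.0  # assumed threshold, not taken from the study
df["flag_for_review"] = df["attention_pct"] < ATTENTION_CUTOFF
print(df.loc[df["flag_for_review"], ["respondent_id", "attention_pct"]])
```

In practice the flagged records would be checked against the video evidence, as in the visual inspection step described above, before deciding whether to exclude the response or withhold compensation.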