Collusion, in which participants act on information shared through channels external to the experimental design, is a potential threat to the internal validity of interactive online experiments that recruit participants through crowdsourcing platforms. Using two experiments, I measure how prevalent collusion is among MTurk workers and whether it depends on experimental design choices. Although workers had incentives to collude, I find no evidence of collusion in the treatments that resembled the design of most other interactive online experiments, which suggests that collusion is not a concern for data quality in typical studies of this kind. However, I find that approximately 3% of MTurk workers collude when the payoff from collusion is unusually high. Collusion should therefore not be overlooked as a threat to data validity when participants in crowdsourced interactive experiments have strong incentives to engage in it.