Online platforms such as Amazon's Mechanical Turk (MTurk) are increasingly used by researchers to collect survey and experimental data. Yet such platforms often present a tumultuous terrain for researchers and reviewers alike. Researchers must navigate the complexities of obtaining representative samples from online participant cohorts, ensuring data quality, ethically incentivizing participant engagement, and maintaining transparency. Reviewers, in turn, face the challenge of evaluating whether such data collection efforts adequately answer important research questions. To clarify these issues, this article offers a series of recommendations: for researchers, on effectively executing data collection via online platforms; and for reviewers, on evaluating it.