In an ideal world, deployed machine learning models would enhance our society. We hope that those models will make unbiased, ethical decisions that benefit everyone. However, this is not always the case; issues arise throughout the pipeline, from data preparation to model deployment. The continued use of biased datasets and biased processes will harm communities and increase the cost of fixing the problem later. In this work, we walk through the decision-making process a researcher should follow before, during, and after system deployment to understand the broader impacts of their research on the community. Throughout this paper, we discuss fairness, privacy, and ownership issues in the machine learning pipeline; assert the need for a responsible human-over-the-loop methodology to bring accountability into the machine learning pipeline; and, finally, reflect on the need to question research agendas that have harmful societal impacts. We examine visual privacy research and draw lessons that apply broadly to artificial intelligence. Our goal is to provide a systematic analysis of the machine learning pipeline for visual privacy and bias issues. With this pipeline, we hope to raise stakeholders' (e.g., researchers, modelers, corporations) awareness of how these issues propagate through the various machine learning phases.