This paper proposes a deep learning framework for mitigating large-scale domain shift in object detection using domain adaptation techniques. We approached the problem with data-centric domain adaptation based on Image-to-Image translation models, a methodology that translates source data into the target domain's style and thereby reduces domain shift. However, such models cannot be applied directly to the domain adaptation task because existing Image-to-Image models focus on style translation alone. We addressed this limitation with a data-centric approach that simply reorders the training sequence of the domain adaptation model. We decomposed image features into content and style, hypothesized that object-specific information in an image is more closely tied to content than to style, and therefore experimented with methods that preserve content information before style is learned. The model was trained in separate stages, with only the training data altered between stages. Our experiments confirmed that the proposed method improves the performance of the domain adaptation model and increases the effectiveness of the generated synthetic data for training object detection models. We compared our approach with the existing single-stage method, in which content and style are trained simultaneously, and argue that our proposed method is more practical for training object detection models. The emphasis of this study is on preserving image content while changing image style. In future work, we plan to conduct additional experiments applying this synthetic data generation technology to other application areas, such as indoor scenes and bin picking.
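To make the two-stage schedule concrete, the sketch below shows one plausible realization of the idea: the same generator is first trained with a content-preserving reconstruction objective on source images, then switched to target-domain data with an adversarial objective, with nothing but the training data changing between stages. The function and module names, loss weights, and the specific adversarial formulation are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch of the two-stage, data-centric training schedule
# described above. Module names, loss weights, and the adversarial
# formulation are assumptions for illustration only.
import torch
import torch.nn as nn

def train_two_stage(gen, disc, source_loader, target_loader,
                    epochs_content=10, epochs_style=10, lr=2e-4):
    g_opt = torch.optim.Adam(gen.parameters(), lr=lr)
    d_opt = torch.optim.Adam(disc.parameters(), lr=lr)
    l1, bce = nn.L1Loss(), nn.BCEWithLogitsLoss()

    # Stage 1: content. The generator sees only source-domain images and
    # learns a reconstruction (identity) mapping, so object-specific
    # content is captured before any style is introduced.
    for _ in range(epochs_content):
        for src in source_loader:
            g_opt.zero_grad()
            loss = l1(gen(src), src)
            loss.backward()
            g_opt.step()

    # Stage 2: style. Same generator, same optimizer; only the training
    # data changes. An adversarial loss pulls generator outputs toward
    # the target domain's style.
    for _ in range(epochs_style):
        for src, tgt in zip(source_loader, target_loader):
            # Discriminator step: target images are "real", translations "fake".
            d_opt.zero_grad()
            fake = gen(src).detach()
            d_real, d_fake = disc(tgt), disc(fake)
            d_loss = (bce(d_real, torch.ones_like(d_real)) +
                      bce(d_fake, torch.zeros_like(d_fake)))
            d_loss.backward()
            d_opt.step()

            # Generator step: fool the discriminator; a small L1 identity
            # term keeps the content learned in stage 1 from drifting.
            g_opt.zero_grad()
            fake = gen(src)
            g_fake = disc(fake)
            g_loss = bce(g_fake, torch.ones_like(g_fake)) + 0.1 * l1(fake, src)
            g_loss.backward()
            g_opt.step()
```

The design choice being illustrated is that the decomposition into content and style is enforced purely by the data schedule rather than by architectural changes, which is what makes the approach data-centric.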