Deep learning has achieved promising results in pavement distress detection. However, a trained model's effectiveness varies with the data and scenarios produced by different camera types and installation positions, and it is time-consuming and labor-intensive to recollect labeled data and retrain a new model every time the scene changes. In this paper, we propose a transfer learning pipeline that addresses this problem by enabling a distress detection model to be applied to untrained scenarios. The framework consists of two main components: data transfer and model transfer. The former trains a generative adversarial network to translate existing image data into the style of the new scene; attentive CutMix and image melding are then applied to insert distress annotations and synthesize labeled data for that scene. After this data expansion, the latter transfers the features extracted by the existing model to the detection task in the new scene through domain adaptation. The effects of varying degrees of knowledge transfer are also discussed. The proposed method is evaluated on two data sets from two different scenes comprising more than 40,000 images in total. It reduces the demand for training data by at least 25% when the model is applied to a new scene, and with the same number of training images it improves model accuracy by 26.55%.
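To make the data-transfer step concrete, the following is a minimal sketch of the CutMix-style annotation insertion, assuming NumPy image arrays. The function name `paste_distress`, the paste-location policy, and the hard copy in place of the paper's image-melding blend are illustrative assumptions, not the authors' implementation; the style-transfer GAN is likewise omitted here.

```python
import numpy as np

def paste_distress(target_img, distress_patch, box):
    """Sketch of CutMix-style synthesis: copy a labeled distress patch into a
    style-transferred target image and return the new bounding-box label.
    (A full pipeline would blend the seam, e.g. via image melding.)"""
    y, x = box                              # top-left corner for the paste (assumed policy)
    h, w = distress_patch.shape[:2]
    out = target_img.copy()
    out[y:y + h, x:x + w] = distress_patch  # hard paste stands in for the melding blend
    label = (x, y, x + w, y + h)            # bounding box of the inserted distress
    return out, label

if __name__ == "__main__":
    # Toy example: a 256x256 "new-scene" image and a 32x48 distress crop.
    scene = np.zeros((256, 256, 3), dtype=np.uint8)
    crack = np.full((32, 48, 3), 200, dtype=np.uint8)
    synth, bbox = paste_distress(scene, crack, box=(100, 80))
    print("synthesized image:", synth.shape, "label:", bbox)
```

In the proposed pipeline, the paste location would be chosen by the attentive CutMix policy and the seam blended by image melding before the synthesized, labeled images are used for domain-adaptive fine-tuning of the detector.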