Ground-penetrating radar (GPR) is widely used to locate buried pipes without excavation, and machine learning has been studied as a way to automatically identify the location of buried pipes from the reflected-wave images obtained by GPR. In machine-learning-based object detection, detection accuracy depends on the quantity and quality of the training data, so expanding the training data is important for improving accuracy. This is especially true for buried pipes, whose existence underground cannot be easily confirmed. Therefore, this study developed a method for expanding training data using you only look once v5 (YOLOv5) and StyleGAN2-ADA to automate the annotation process. Of particular importance is a framework in which generative adversarial networks generate images, with an emphasis on images in which buried pipes are challenging for YOLOv5 to detect, and these images are added to the training dataset so that training is repeated recursively, which greatly improves detection accuracy. Specifically, F-values of 0.915, 0.916, and 0.924 were achieved by automatically generating training images step by step from only 500, 1000, and 2000 initial training images, respectively. These values exceed the F-value of 0.900 obtained by training on 15,000 manually annotated images, a much larger dataset. In addition, we applied the method to a road in Shizuoka Prefecture, Japan, and confirmed that it can detect the location of buried pipes with high accuracy on a real road. The proposed method can reduce the labor of training-data expansion, which is time-consuming and costly in practice, and as a result contributes to improved detection accuracy.
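A minimal sketch of the recursive data-expansion loop described above is given below, assuming user-supplied callables; the parameter names (train_detector, find_hard_examples, train_generator, synthesize, auto_annotate) are hypothetical placeholders for YOLOv5 training, hard-example selection, StyleGAN2-ADA training, image synthesis, and automatic annotation, and are not the authors' code or any library's actual API.

```python
def expand_training_data(images, labels, train_detector, find_hard_examples,
                         train_generator, synthesize, auto_annotate, rounds=3):
    """Illustrative sketch of the recursive training-data expansion loop.

    All callables are supplied by the caller; their names are placeholders
    for the steps summarized in the abstract, not a concrete implementation.
    """
    images, labels = list(images), list(labels)
    for _ in range(rounds):
        # 1. Train the detector (YOLOv5 in the study) on the current training set.
        detector = train_detector(images, labels)
        # 2. Select images in which buried pipes are hard for the detector to find.
        hard_examples = find_hard_examples(detector, images, labels)
        # 3. Train the generator (StyleGAN2-ADA in the study) with emphasis on those images.
        generator = train_generator(hard_examples)
        # 4. Synthesize additional GPR-like images resembling the hard examples.
        new_images = synthesize(generator)
        # 5. Annotate the synthetic images automatically with the current detector.
        new_labels = auto_annotate(detector, new_images)
        # 6. Add the new image-label pairs to the training set and repeat.
        images += new_images
        labels += new_labels
    return images, labels
```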