Automotive paint defect detection plays a crucial role in the automotive production process. Current research on visual defect detection is mainly based on supervised learning, which requires a large number of labeled image samples for model training. This labeling work is not only time-consuming but also expensive, which seriously hinders the testing and deployment of such models in practice. To address this issue, this study proposes a new automotive paint defect detection method based on a semi-supervised training strategy. First, a semi-supervised automotive paint defect detection framework is presented that exploits both labeled and unlabeled samples, effectively reducing the cost of data labeling. Then, a spatial pyramid pooling fast external attention (SPPF-EA) module, which introduces an external attention mechanism into the spatial pyramid pooling fast structure, is proposed to improve the traditional YOLOv7 network; the resulting network, called YOLOv7-EA, achieves better detection performance. This network serves as the detector that generates high-quality pseudo labels for the unlabeled samples, providing additional data for model training, and it also performs the final detection task. Lastly, a Wise-Intersection over Union (Wise-IoU) loss function, which accounts for the quality of the anchor box, is introduced to reduce the interference of low-quality samples and improve the convergence speed and detection accuracy of the model. With this method, automotive paint defect detection can be accomplished with only a small number of labeled image samples. Experimental results on an automotive paint defect dataset show that, with 10% and 15% of the samples labeled, the proposed method achieves higher mAP@0.5, mAP@0.75, and mAP@0.5:0.95 than comparison methods, demonstrating good defect detection performance.
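As a rough illustration of the SPPF-EA idea summarized above, the following is a minimal PyTorch-style sketch, assuming external attention in the common form of two learnable linear memories with double normalization appended after a YOLO-style SPPF block. The class names (ExternalAttention, SPPFEA) and hyperparameters such as mem_size are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ExternalAttention(nn.Module):
    """External attention: two linear layers acting as shared external
    memories (M_k, M_v) with double normalization over the attention map."""
    def __init__(self, channels, mem_size=64):  # mem_size is an assumed hyperparameter
        super().__init__()
        self.mk = nn.Linear(channels, mem_size, bias=False)  # key memory M_k
        self.mv = nn.Linear(mem_size, channels, bias=False)  # value memory M_v
        self.softmax = nn.Softmax(dim=1)                     # softmax over pixel positions

    def forward(self, x):                       # x: (B, C, H, W)
        b, c, h, w = x.shape
        feat = x.flatten(2).transpose(1, 2)     # (B, N, C), N = H * W
        attn = self.mk(feat)                    # (B, N, S) attention map
        attn = self.softmax(attn)               # normalize over the N pixels
        attn = attn / (attn.sum(dim=2, keepdim=True) + 1e-9)  # L1-normalize over memory slots
        out = self.mv(attn)                     # (B, N, C)
        return out.transpose(1, 2).reshape(b, c, h, w) + x   # residual connection

class SPPFEA(nn.Module):
    """Hypothetical SPPF-EA block: YOLO-style SPPF (stacked max pooling)
    followed by external attention on the fused feature map."""
    def __init__(self, c_in, c_out, k=5):
        super().__init__()
        c_hidden = c_in // 2
        self.cv1 = nn.Conv2d(c_in, c_hidden, 1, 1)
        self.cv2 = nn.Conv2d(c_hidden * 4, c_out, 1, 1)
        self.pool = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
        self.ea = ExternalAttention(c_out)

    def forward(self, x):
        x = self.cv1(x)
        y1 = self.pool(x)
        y2 = self.pool(y1)
        y3 = self.pool(y2)
        return self.ea(self.cv2(torch.cat((x, y1, y2, y3), dim=1)))

# Shape check on a dummy feature map: (1, 256, 20, 20) -> (1, 256, 20, 20)
print(SPPFEA(256, 256)(torch.randn(1, 256, 20, 20)).shape)
```

In this sketch the external attention layer is placed after the SPPF output purely for illustration; where exactly the attention is inserted relative to the pooling branches follows the paper's design, which is not detailed in the abstract.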