This paper proposes a fabric defect detection algorithm based on the SA-Pix2pix network and transfer learning to address the insufficient accuracy of complex-pattern fabric defect detection when sample data are limited. Its primary contribution is to treat defects as disruptions of the fabric's texture: a generative adversarial network reconstructs each defective image into an image of normal fabric texture, the reconstructed image is compared with the defective one through a dissimilarity calculation, and the resulting dissimilarity map is segmented to localize defects. Two weaknesses of the baseline network are addressed: the generator's convolutional layers model long-range dependencies poorly, which lowers reconstruction accuracy, and the adversarial loss function handles image detail inadequately. The network structure and loss function are therefore improved with a self-attention mechanism, an L1 loss, and an improved structural loss. To counteract the decline in training accuracy caused by the scarcity of complex fabric defect samples, a channel-wise domain transfer learning approach is introduced that constrains the training of the target network through feature distributions, alleviating the overfitting induced by limited sample data. Experiments compare three methods on five types of complex-pattern fabric defects; the results show that the proposed approach achieves higher detection accuracy than the two comparison methods under limited-sample conditions.
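The reconstruct-then-compare stage described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the trained SA-Pix2pix generator is stubbed by the known normal texture, and the dissimilarity measure and threshold value are assumptions chosen for clarity.

```python
import numpy as np

def segment_defects(defective, reconstructed, threshold=0.2):
    """Threshold the pixel-wise dissimilarity between a defective image
    and its reconstruction; pixels above the threshold are marked defective."""
    residual = np.abs(defective.astype(np.float32) - reconstructed.astype(np.float32))
    if residual.ndim == 3:            # collapse colour channels to one map
        residual = residual.mean(axis=-1)
    return residual > threshold

# Toy usage: a uniform "normal" texture stands in for the generator output,
# since no trained model is available in this sketch.
normal = np.full((8, 8), 0.5, dtype=np.float32)
defective = normal.copy()
defective[2:4, 2:4] = 1.0             # simulated local defect
mask = segment_defects(defective, normal)
```

In the paper's pipeline the `reconstructed` argument would come from the generator, so regions it restores to normal texture light up in the residual while defect-free regions cancel out.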
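The self-attention mechanism added to the generator compensates for the limited long-range dependency modeling of plain convolutions by letting every spatial position attend to every other. A minimal SAGAN-style sketch over a flattened feature map, with plain arrays standing in for learned projections (the paper's exact block layout is not specified in this abstract):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Self-attention over a flattened feature map x of shape (HW, C);
    wq/wk/wv are the learned query/key/value projections."""
    q, k, v = x @ wq, x @ wk, x @ wv
    logits = (q @ k.T) / np.sqrt(k.shape[1])     # pairwise position affinities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over all positions
    return x + attn @ v                          # residual: add long-range context

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 8))                     # a 4x4 map with 8 channels, flattened
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(x, wq, wk, wv)
```

The residual connection means the block can only add context on top of the convolutional features, which keeps training stable when attention weights are still uninformative.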
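The improved objective combines the adversarial loss with an L1 term for pixel fidelity and a structural term for local detail. The abstract does not specify the paper's improved structural loss, so the sketch below uses the standard global SSIM form as a stand-in, and the weighting coefficients are illustrative assumptions:

```python
import numpy as np

def l1_loss(x, y):
    # mean absolute pixel error between reconstruction x and target y
    return np.mean(np.abs(x - y))

def structural_loss(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """1 - SSIM computed globally; a standard stand-in for the paper's
    improved structural loss, which the abstract does not detail."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return 1.0 - ssim

def reconstruction_loss(x, y, lam_l1=100.0, lam_ss=10.0):
    # the weights lam_l1 and lam_ss are illustrative, not from the paper
    return lam_l1 * l1_loss(x, y) + lam_ss * structural_loss(x, y)
```

The L1 term penalizes per-pixel deviation while the structural term penalizes differences in local luminance, contrast, and correlation, which is what sharpens the network's handling of image detail.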
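The channel-wise transfer learning constraint can be pictured as keeping the target network's per-channel feature statistics close to those of a source network pretrained on abundant data. The abstract does not give the constraint's exact form, so the function below, which matches per-channel means and variances, is a hypothetical stand-in:

```python
import numpy as np

def channel_distribution_loss(feat_t, feat_s):
    """Penalize per-channel mean/variance mismatch between target-network
    features feat_t and source-network features feat_s, both (N, C, H, W);
    a hypothetical stand-in for the paper's feature-distribution constraint."""
    mu_t, mu_s = feat_t.mean(axis=(0, 2, 3)), feat_s.mean(axis=(0, 2, 3))
    var_t, var_s = feat_t.var(axis=(0, 2, 3)), feat_s.var(axis=(0, 2, 3))
    return float(np.mean((mu_t - mu_s) ** 2) + np.mean((var_t - var_s) ** 2))
```

Added to the training objective, such a term discourages the target network's features from drifting far from the source distribution, which is one way a distribution constraint can curb overfitting on scarce defect samples.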