Training supervised learning models requires ground-truth annotations, which are difficult to obtain in large quantities in production practice. Unsupervised learning requires only unannotated flawless and anomalous fabric images, but it inevitably generates a great deal of background noise when producing its results, which degrades detection quality. To overcome these limitations, we propose a new approach: image-restoration fabric defect detection based on a dual generative adversarial network patch model (DGPM). We train a modified generative adversarial network using only flawless and anomalous images of the fabric. We propose the patch model to directly capture specific information about fabric defects and add a self-attention module to reduce the generation of background noise. The performance of the DGPM is evaluated on box-, star-, and dot-patterned fabric databases. The true positive rate (TPR) is 81.56% with an F-measure of 62.69% on the box pattern, 83.72% and 67.33% on the dot pattern, and 79.79% and 64.65% on the star pattern.
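The restoration-based detection idea summarized above can be illustrated with a minimal sketch: a defect map is obtained by thresholding the residual between an input image and its GAN-restored counterpart, and pixel-level TPR and F-measure are computed against a ground-truth mask. This is an illustrative assumption, not the authors' implementation; the function names and the threshold value are hypothetical.

```python
import numpy as np

def defect_mask(original, restored, thresh=0.15):
    # Residual between the input image and its restoration;
    # large residuals are taken to indicate defective pixels.
    residual = np.abs(original.astype(float) - restored.astype(float))
    return residual > thresh

def tpr_f_measure(pred, truth):
    # Pixel-level true positive rate and F-measure
    # of a predicted defect mask against ground truth.
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    f = 2 * precision * tpr / (precision + tpr) if (precision + tpr) else 0.0
    return tpr, f
```

In this sketch a higher threshold suppresses background noise at the cost of missing faint defects, which is the trade-off the self-attention module is intended to ease.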