Fabric image retrieval currently faces challenges such as the high cost of image annotation and vulnerability to adversarial perturbations. To minimize manual supervision and enhance the robustness of the retrieval system, this study proposes a robust deep image retrieval algorithm based on multi-view self-supervised product quantization for artificially generated fabric images. The method introduces a multi-view module that constructs four views of each unlabeled image: two views augmented by AutoAugment, an adversarial view, and a high-frequency view. AutoAugment produces more varied augmentations, allowing the model to learn the diverse features and structures of fabric textures; because fabric images are typically highly complex and diverse, injecting adversarial samples into training adds further noise and variation, and adversarial training is among the most effective known defenses against adversarial attacks; the high-frequency component sharpens the edges, details, and contrasts in fabric images. A robust cross-quantized contrastive loss function is also designed to jointly learn codewords and deep visual descriptors by contrasting the multiple views, effectively improving the model's robustness and generalization. Experimental results on multiple datasets demonstrate the method's effectiveness: it significantly improves the robustness of the retrieval system compared with other state-of-the-art retrieval algorithms. The proposed method offers a new approach to fabric image retrieval and is of great significance for improving its performance.
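As an illustration of how a high-frequency view emphasizing edges and fine texture might be obtained, the following is a minimal sketch using an FFT-based high-pass filter; the function name and the cutoff `radius` are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def high_frequency_view(image: np.ndarray, radius: int = 8) -> np.ndarray:
    """Sketch: extract a high-frequency view of a grayscale image.

    The image is transformed to the frequency domain, frequencies within
    `radius` of the spectrum centre (the low-frequency content, including
    the DC component) are zeroed, and the result is transformed back.
    Only edges, fine texture, and contrast transitions survive.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image))  # centre low frequencies
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    low_freq_mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    spectrum[low_freq_mask] = 0  # suppress low-frequency content
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
```

A smooth, nearly uniform region yields values near zero in this view, while texture boundaries and fine weave patterns remain, which is what makes such a view complementary to the augmented and adversarial views.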