This paper explores the application of deep learning techniques to the automatic recognition and transcription of oracle bone and bronze (Jinwen) inscriptions. We improve recognition accuracy by leveraging Generative Adversarial Networks (GANs) for image data augmentation and a Pix2Pix model for text repair. Our approach combines a ResNet50 backbone for robust feature extraction with unsupervised domain adaptation driven by multiple pseudo labels, enabling efficient text recognition and transcription. To strengthen the repair stage, we generate hard-to-distinguish samples with GANs and employ a U-Net-based text repair model augmented with dense connectivity and spectral normalization; together with the ResNet50 features and the domain adaptation techniques, this improves the model's generalization. On the Oracle dataset, recognition accuracy rises from 82% to 94.5%, highlighting the effectiveness of our image augmentation strategy. The ResNet50 extractor outperforms alternative backbones across Intersection over Union (IoU) thresholds, establishing its superiority for feature extraction. In a real-world setting, evaluation on a combined Oracle and Jinwen dataset yields recognition accuracy above 80%, demonstrating that the model handles the recognition task effectively. This research underscores the potential of deep learning algorithms for automating the recognition and transcription of ancient texts, offering a solution that substantially boosts recognition accuracy through a synergistic blend of image augmentation, feature extraction, and domain adaptation.
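To make the recognition pipeline concrete, the following is a minimal PyTorch sketch, not the authors' released code, of a ResNet50 feature extractor with a classification head, adapted with confidence-filtered pseudo labels on unlabeled target-domain glyph images; the class and function names, the confidence threshold, and the single-round pseudo-labeling loop are illustrative assumptions rather than the paper's exact multi-pseudo-label scheme.

```python
# Minimal sketch (assumed names and hyperparameters, not the authors' implementation):
# a ResNet50 feature extractor with a classifier head, adapted to an unlabeled
# target domain by adding a loss on high-confidence pseudo labels.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class GlyphRecognizer(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        backbone = models.resnet50(weights=None)  # pretrained weights optional
        # Drop the final fc layer; keep global average pooling (2048-d features).
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.classifier = nn.Linear(2048, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.features(x).flatten(1)  # (B, 2048)
        return self.classifier(f)


def adaptation_step(model, optimizer, source_batch, target_images, threshold=0.95):
    """One training step: supervised loss on labeled source-domain glyphs plus a
    pseudo-label loss on confident predictions for unlabeled target-domain glyphs."""
    model.train()
    src_x, src_y = source_batch
    loss = F.cross_entropy(model(src_x), src_y)

    with torch.no_grad():
        probs = F.softmax(model(target_images), dim=1)
        conf, pseudo_y = probs.max(dim=1)
        keep = conf > threshold  # retain only confident pseudo labels
    if keep.any():
        loss = loss + F.cross_entropy(model(target_images[keep]), pseudo_y[keep])

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the paper's setting, multiple pseudo labels and the GAN-augmented and Pix2Pix-repaired images would feed this same recognition stage; the sketch shows only the basic confidence-filtered adaptation step.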