Deep Neural Networks (DNNs) have gained widespread popularity for visual processing tasks owing to their superior performance and the wealth of available images and videos. Rich concept representation in the training dataset is crucial for the effectiveness of the trained DNN model. However, training corpora often contain images of the same object taken from slightly different angles or with other minor variations, and this redundancy wastes limited bandwidth, storage, and processing power. Because near-duplicate images contribute very little to the effectiveness of the model, we propose a novel framework for Visual Indexing and Retrieval-based image Deduplication (VIRD). VIRD eliminates redundant data while maintaining information quality in the training corpus through visual indexing and retrieval. It balances the tradeoff between a large deduplication ratio and a stable mean Average Precision (mAP) by adjusting the deduplication threshold used for graph-based approximate retrieval of near-duplicate images from the given target corpora. The effectiveness of VIRD is validated through extensive experiments on well-known Convolutional Neural Network (CNN) benchmarks: while preserving the same validation mAP, VIRD reduces the corpus size by 25.13%. Moreover, by streamlining the training process, VIRD lowers the energy consumption of DNN training by 27.17%, enabling more practical and sustainable DNN training.
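
To make the core idea concrete, below is a minimal sketch of threshold-based near-duplicate removal using a graph-based (HNSW) approximate index, in the spirit of the retrieval step described above. It is not the paper's implementation: the embedding source, the greedy keep-or-drop policy, the `hnswlib` library choice, and all parameter values are illustrative assumptions.

```python
# Sketch: greedy near-duplicate deduplication over CNN feature vectors
# using an HNSW approximate-nearest-neighbor index (via hnswlib).
# The feature extractor is assumed; random vectors stand in below.
import numpy as np
import hnswlib

def deduplicate(features: np.ndarray, threshold: float) -> list[int]:
    """Keep an image only if no already-kept image lies within
    `threshold` cosine distance; return indices of kept images."""
    n, dim = features.shape
    index = hnswlib.Index(space="cosine", dim=dim)
    index.init_index(max_elements=n, ef_construction=200, M=16)
    kept = []
    for i, vec in enumerate(features):
        if kept:  # query only once the index is non-empty
            _, dists = index.knn_query(vec, k=1)
            if dists[0][0] < threshold:
                continue  # near duplicate of a kept image: drop it
        index.add_items(vec.reshape(1, -1), np.array([i]))
        kept.append(i)
    return kept

# Hypothetical usage: in practice `features` would come from a CNN
# backbone (e.g., the penultimate layer), L2-normalized per image.
features = np.random.rand(1000, 512).astype(np.float32)
features /= np.linalg.norm(features, axis=1, keepdims=True)
kept = deduplicate(features, threshold=0.1)
print(f"kept {len(kept)} of {features.shape[0]} images")
```

In this sketch, raising the threshold removes more images (a larger deduplication ratio) at the risk of discarding informative variants, which is the ratio-versus-mAP tradeoff the abstract refers to.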