Online retrieval of clothing images is a crucial task because finding items that exactly match a query image in a large collection is extremely challenging. Large variations in clothing images degrade the retrieval accuracy of visual search. Retrieval accuracy is further hampered by the high dimensionality of the feature vectors obtained from pre-trained deep CNN models. This research aims to enhance the training and test accuracy of clothes retrieval through two means. First, features are extracted using a modified AlexNet (M-AlexNet) in which the ReLU activation function is replaced with the self-regularized Mish activation function because of its non-monotonic nature. The M-AlexNet with Mish is trained on the CIFAR-10 dataset using a SoftMax classifier. The second contribution reduces the dimensionality of the feature vectors obtained from M-AlexNet: the proposed Joint Shannon's Entropy Pearson Correlation Coefficient (JSE-PCC) technique selects the top-k ranked features and removes some of the dissimilar features to enhance clothes retrieval performance. To evaluate the efficacy of the proposed methods, comparisons are performed against other deep CNN models, namely baseline AlexNet, VGG-16, VGG-19, and ResNet50, on DeepFashion2, MVC, and the proposed Clothes Image Dataset (CID). Extensive experiments indicate that AlexNet with Mish attains 85.15%, 82.04%, and 83.65% accuracy on the DeepFashion2, MVC, and CID datasets, respectively. Hence, M-AlexNet combined with the proposed feature selection technique surpasses the baselines by margins of 5.11% on DeepFashion2, 1.95% on MVC, and 3.51% on CID.
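The two ingredients named above can be sketched in code. The Mish activation is well defined (x · tanh(softplus(x))); the selection routine below is only an illustrative stand-in for JSE-PCC, since the abstract does not give its exact formulation: it ranks features by Shannon entropy (estimated from histograms), keeps the top k, and prunes features whose Pearson correlation with an already-kept feature exceeds a threshold. All function names, the bin count, and the 0.9 threshold are assumptions for illustration.

```python
import numpy as np

def mish(x):
    # Mish: x * tanh(softplus(x)); a smooth, non-monotonic,
    # self-regularized alternative to ReLU.
    return x * np.tanh(np.log1p(np.exp(x)))

def select_features(F, k, corr_threshold=0.9):
    """Illustrative entropy-ranked, correlation-pruned feature
    selection (hypothetical stand-in for the paper's JSE-PCC).
    F: (n_samples, n_features) feature matrix from the CNN."""
    n_feat = F.shape[1]
    # Shannon entropy of each feature, estimated via histograms
    # (16 bins is an arbitrary illustrative choice).
    entropies = np.empty(n_feat)
    for j in range(n_feat):
        counts, _ = np.histogram(F[:, j], bins=16)
        p = counts / counts.sum()
        p = p[p > 0]
        entropies[j] = -np.sum(p * np.log2(p))
    # Rank by entropy and keep the top-k most informative features.
    ranked = np.argsort(entropies)[::-1][:k]
    # Prune features highly correlated with an already-kept one.
    kept = []
    for j in ranked:
        if all(abs(np.corrcoef(F[:, j], F[:, i])[0, 1]) < corr_threshold
               for i in kept):
            kept.append(j)
    return np.array(kept)
```

In this sketch a perfectly duplicated feature column is always pruned, since its correlation with the original is 1.0; the real JSE-PCC presumably combines the entropy and correlation criteria jointly rather than in two sequential passes.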