Image retrieval is a prominent subject of study in image processing and computer vision. With applications in various domains, such as logo search, product search, or general image search in engines like Google and Bing, image retrieval has received significant attention for many years. In this work, we investigate a framework that leverages transferred visual features and hashing algorithms to find similar images in a dataset. The key idea of our solution is to answer the following question: “How can we convert an image into a binary code and search for it more efficiently in a large-scale dataset?” To this end, we use CNN models pretrained on ImageNet for image representation and then convert the resulting features into binary codes using hashing algorithms. The images in the dataset are represented by these binary codes, and the Hamming distance is used to retrieve the indexed images. To demonstrate the robustness of the system, we systematically evaluated its speed with raw indexing and hashing indexing on four datasets: CIFAR-10, Caltech-101, Oxford-102-Flowers, and MS-COCO 2017. The experimental results show that locality-sensitive hashing (LSH) with 2,048-bit binary codes achieves the same or greater precision than raw indexing. Furthermore, the findings show that the MobileNet architecture consistently outperforms other architectures across these datasets, effectively balancing speed and precision.
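To make the pipeline concrete, the following is a minimal sketch of the general idea (pretrained CNN features hashed into binary codes via random-projection LSH, then ranked by Hamming distance). It is an illustrative assumption, not the authors' implementation; the feature dimension, random hyperplanes, and toy data are hypothetical placeholders.

```python
import numpy as np

# Illustrative sketch only: sign-based (random hyperplane) LSH over CNN features.
# Real features would come from a pretrained model (e.g., MobileNet); here we use
# random vectors as stand-ins.

def fit_lsh(feature_dim, n_bits=2048, seed=0):
    """Sample random hyperplanes for sign-based locality-sensitive hashing."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((feature_dim, n_bits))

def to_binary(features, hyperplanes):
    """Map real-valued features to {0,1} codes via the sign of each projection."""
    return (features @ hyperplanes > 0).astype(np.uint8)

def hamming_search(query_code, db_codes, k=5):
    """Return indices of the k database codes closest to the query in Hamming distance."""
    distances = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(distances)[:k]

# Toy usage: 1000 "database" images and one query, each with a 1280-d feature
# vector (a hypothetical embedding size), hashed to 2,048-bit codes.
rng = np.random.default_rng(1)
db_features = rng.standard_normal((1000, 1280))
query_features = rng.standard_normal((1, 1280))

planes = fit_lsh(feature_dim=1280, n_bits=2048)
db_codes = to_binary(db_features, planes)
query_code = to_binary(query_features, planes)

print(hamming_search(query_code, db_codes, k=5))
```

In this sketch, raw indexing would correspond to ranking by a distance over the original real-valued features, whereas the hashed codes trade a small amount of precision for much cheaper Hamming-distance comparisons.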