Recently, deep hashing has come to dominate single-label image retrieval. However, remote sensing images, which often carry multiple labels, benefit little from these approaches. To overcome the limitations of single-label image retrieval in the remote sensing domain, we propose a multi-label remote sensing image retrieval framework (MLRSIR-NET). Specifically, the proposed MLRSIR-NET is composed of two main sub-networks: a multi-level feature extraction network and a deep hash network. The multi-level feature extraction network predicts multi-level features to exploit the characteristics of different levels of a Convolutional Neural Network (CNN). To strengthen discriminative feature representation, the multi-level features are aggregated and fed to a Convolutional Block Attention Module (CBAM), which amplifies the representation of relevant multi-label features. CBAM is flexibly integrated into our network and trained end-to-end. The hash network stacks two fully connected layers that learn multiple hashing functions to encode the feature vector into a compact hash code. Finally, we conduct experiments on two multi-label image benchmarks, the Dense Labeling Remote Sensing Dataset (DLRSD) and the Wuhan Dense Labeling Dataset (WHDLD), to systematically assess performance. The results show that the proposed framework achieves Mean Average Precision (MAP) scores of 85.4%, 87.2%, 90.8%, and 92.9% for 12-bit, 24-bit, 32-bit, and 48-bit code lengths, respectively, on DLRSD. On WHDLD, the proposed framework surpasses DCH, achieving MAP scores of 93.8%, 98.7%, 91.9%, and 94.6% for the same code lengths, respectively.
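To make the hash sub-network concrete, the following is a minimal NumPy sketch of the idea described above: two stacked fully connected layers map an aggregated feature vector to a low-dimensional real-valued output, which is then binarized into a compact hash code. The hidden width, random weights, and activation choices here are illustrative assumptions, not the trained parameters or exact layer configuration of MLRSIR-NET.

```python
import numpy as np

rng = np.random.default_rng(0)

def hash_head(features, n_bits=32, hidden_dim=128):
    """Illustrative two-layer hash head.

    Projects an aggregated feature vector through two fully
    connected layers, then thresholds the output to obtain a
    binary hash code. Weights are random placeholders standing
    in for learned hashing functions (hypothetical values).
    """
    d = features.shape[-1]
    w1 = rng.standard_normal((d, hidden_dim)) * 0.01      # first FC layer weights
    w2 = rng.standard_normal((hidden_dim, n_bits)) * 0.01  # second FC layer weights
    hidden = np.tanh(features @ w1)                        # first fully connected layer
    logits = np.tanh(hidden @ w2)                          # second fully connected layer
    return (logits > 0).astype(np.uint8)                   # binarize to {0, 1} hash bits

# Usage: a hypothetical 512-dimensional aggregated CNN feature vector
feat = rng.standard_normal(512)
code = hash_head(feat, n_bits=32)
print(code.shape)  # (32,)
```

At retrieval time, such binary codes would be compared by Hamming distance, which is what makes the compact encoding attractive for large-scale search.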