Retrieving medical images from a large inter-domain dataset requires multiple high-efficiency processing stages, including, but not limited to, image classification, domain-specific feature extraction and selection, ranking, and post-processing. A wide variety of system models have been designed to perform these tasks, but they offer limited accuracy and retrieval performance due to improper cross-domain feature processing. To improve the performance of cross-domain medical image retrieval systems, this paper proposes a transfer-learning mechanism that learns features from one domain and applies the trained models to other domains. The proposed method uses a combination of VGGNet19, AlexNet, InceptionNet, and XceptionNet models for ensemble learning, along with wavelet and bag of features (WBoF) for efficient feature extraction. Each of the individual models was applied to the different medical domains, and their retrieval accuracies were evaluated. Based on this evaluation, it is observed that VGGNet19 performs best on computed tomography (CT) images, AlexNet performs best on magnetic resonance imaging (MRI) images, InceptionNet performs best on positron emission tomography (PET) images, while XceptionNet has the best retrieval performance on ultrasound (USG) images. Using this observation, a highly efficient augmentation model is designed, which achieves an accuracy of 98.06%, a precision of 65.9%, a recall of 76.1%, and an area under the curve (AUC) of 98.9% across different datasets. This performance is evaluated on a wide variety of medical image datasets, including the Center for In Vivo Microscopy (CIVM), Embryonic and Neonatal Mouse (H&E, MR), the LONI Image Data Archive, the Open Access Series of Imaging Studies (OASIS), and CT scans for Colon Cancer (CSCC). It is observed that the proposed model outperforms most recent state-of-the-art models and achieves consistent parametric results across multiple medical imaging domains.
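As an illustration of the per-modality backbone selection described above, the following is a minimal sketch, not the authors' implementation (the paper does not specify a framework). It assumes PyTorch with torchvision for VGGNet19, AlexNet, and InceptionNet, and the timm package for Xception, and treats each pretrained network as a fixed feature extractor whose L2-normalised embeddings can be ranked by cosine similarity for retrieval; the WBoF feature-extraction and ensemble-augmentation stages are omitted.

```python
# Minimal sketch (assumed framework: PyTorch + torchvision + timm).
# Routes each image to the backbone reported to work best for its modality
# and uses that backbone as a fixed feature extractor for retrieval.

import torch
import torchvision.models as tvm
import timm  # torchvision does not ship Xception, so it is loaded via timm


def load_backbone(modality: str) -> torch.nn.Module:
    """Return a pretrained feature extractor for the given modality."""
    if modality == "CT":        # reported best: VGGNet19
        m = tvm.vgg19(weights=tvm.VGG19_Weights.DEFAULT)
        m.classifier = torch.nn.Identity()   # drop the ImageNet classifier head
    elif modality == "MRI":     # reported best: AlexNet
        m = tvm.alexnet(weights=tvm.AlexNet_Weights.DEFAULT)
        m.classifier = torch.nn.Identity()
    elif modality == "PET":     # reported best: InceptionNet
        m = tvm.inception_v3(weights=tvm.Inception_V3_Weights.DEFAULT)
        m.fc = torch.nn.Identity()
    elif modality == "USG":     # reported best: XceptionNet
        m = timm.create_model("xception", pretrained=True, num_classes=0)
    else:
        raise ValueError(f"unknown modality: {modality}")
    return m.eval()


@torch.no_grad()
def embed(model: torch.nn.Module, batch: torch.Tensor) -> torch.Tensor:
    """L2-normalised embeddings; cosine similarity over these ranks the gallery."""
    feats = model(batch)
    return torch.nn.functional.normalize(feats.flatten(1), dim=1)
```

Given a query embedding and a matrix of gallery embeddings produced by the same modality-specific backbone, retrieval then reduces to sorting the gallery by dot product with the query, since both are unit-normalised.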