Content-based medical image retrieval (CBMIR), a specialized area within content-based image retrieval (CBIR), involves two main stages: feature extraction and retrieval ranking. The feature extraction stage is particularly crucial to building a high-performing retrieval system. Recently, pre-trained deep convolutional neural networks (CNNs) have become the preferred feature extractors owing to their strong performance and versatility, including the ability to be re-trained and adapted through transfer learning. Many pre-trained deep CNN models have been employed as feature extractors in CBMIR systems, either individually or in combination by merging the feature vectors of several models. In this study, a CBMIR method based on multiple pre-trained deep CNNs is introduced, using two popular models, ResNet-18 and GoogleNet, for feature extraction. The method combines the feature representations of the two models by selecting, for each image, the model that attains the highest classification probability during training. The method's effectiveness is assessed on two well-known medical image datasets, Kvasir and PH2. The evaluation results show that the proposed method achieves average precision scores of 94.13% on Kvasir and 55.67% on PH2 at a top-10 cutoff, surpassing some leading methods in this research area.
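The per-image model selection described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, array shapes, and the tie-breaking rule are all assumptions, and the real system would obtain the softmax probabilities and feature vectors from trained ResNet-18 and GoogleNet classifiers rather than from precomputed arrays.

```python
import numpy as np

def select_features(probs_a, feats_a, probs_b, feats_b):
    """For each image, keep the feature vector from the model whose top
    softmax class probability is higher (hypothetical reading of the
    per-image selection rule described in the abstract).

    probs_a, probs_b: (n_images, n_classes) softmax outputs of the two models.
    feats_a, feats_b: per-image feature vectors from the two models; their
    dimensions may differ (e.g. 512 for ResNet-18 vs. 1024 for GoogleNet),
    so the result is returned as a list rather than a single array.
    """
    conf_a = probs_a.max(axis=1)  # top class probability per image, model A
    conf_b = probs_b.max(axis=1)  # top class probability per image, model B
    use_a = conf_a >= conf_b      # assumed tie-break: prefer model A
    return [feats_a[i] if use_a[i] else feats_b[i] for i in range(len(use_a))]

# Toy usage with two images and two classes (illustrative values only):
probs_a = np.array([[0.9, 0.1], [0.3, 0.7]])
probs_b = np.array([[0.6, 0.4], [0.2, 0.8]])
feats_a = np.array([[1.0, 1.0], [2.0, 2.0]])
feats_b = np.array([[3.0, 3.0], [4.0, 4.0]])
selected = select_features(probs_a, feats_a, probs_b, feats_b)
# Image 0 is more confident under model A, image 1 under model B.
```

The retained feature vectors would then be indexed and compared (e.g. by cosine or Euclidean distance) in the retrieval-ranking stage.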