Content-based medical image retrieval (CBMIR) is a technique for retrieving medical images on the basis of automatically derived image features such as colour, texture and shape. CBMIR has many applications, including teaching, research, diagnosis and electronic patient records. The retrieval performance of a CBMIR system depends mainly on the representation of image features, which researchers have studied extensively for decades. Although a number of methods and approaches have been suggested, feature representation remains one of the most challenging problems in current CBMIR studies, largely because of the well-known "semantic gap" between machine-captured low-level image features and human-perceived high-level semantic concepts. Many techniques have been proposed to bridge this gap. This study proposes a novel relevance feedback retrieval method (RFRM) for CBMIR. The feedback implemented here is based on voting values produced by each class in the image repository. Eighteen features based on colour moments and grey-level co-occurrence matrix (GLCM) texture were extracted to represent each image, and eight common similarity coefficients were used as similarity measures. After a brief initial search using a single random image query, the top images retrieved from each class act as voters to select the most effective similarity coefficient, which is then used for the final search. The proposed method is evaluated on the Kvasir dataset, which contains 4,000 images divided into eight classes and has recently been widely used for gastrointestinal disease detection. Intensive statistical analysis of the results shows that the proposed RFRM method achieves the best performance in enhancing both recall and precision, whichever group of similarity coefficients it uses.

INDEX TERMS Content-based image retrieval, feature extraction, voting method, relevance feedback.
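The abstract's core idea can be sketched in a few lines of Python. This is only an illustrative sketch, not the authors' implementation: the colour-moment extractor below covers 9 of the 18 features (the GLCM texture half is omitted for brevity), and the four similarity coefficients and the `top_k` parameter are stand-ins chosen here, not taken from the paper. The voting step follows the abstract: for each candidate coefficient, run a query, take the top-retrieved images, and let same-class hits count as votes for that coefficient.

```python
import numpy as np

def color_moments(img):
    """First three colour moments (mean, std, skewness) per RGB channel -> 9 features.

    img: HxWx3 array with values in [0, 1].
    """
    feats = []
    for c in range(3):
        ch = img[..., c].ravel()
        m, s = ch.mean(), ch.std()
        skew = ((ch - m) ** 3).mean() / (s ** 3 + 1e-8)
        feats.extend([m, s, skew])
    return np.array(feats)

# Illustrative similarity coefficients, expressed as distances (lower = more similar).
# The paper uses eight coefficients; these four are placeholder examples.
COEFFS = {
    "euclidean": lambda a, b: np.linalg.norm(a - b),
    "manhattan": lambda a, b: np.abs(a - b).sum(),
    "cosine":    lambda a, b: 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8),
    "canberra":  lambda a, b: (np.abs(a - b) / (np.abs(a) + np.abs(b) + 1e-8)).sum(),
}

def select_coefficient(db_feats, db_labels, query_feat, query_label, top_k=10):
    """Voting step: pick the coefficient whose top-k retrieval for the query
    contains the most images of the query's class."""
    votes = {}
    for name, dist in COEFFS.items():
        d = np.array([dist(query_feat, f) for f in db_feats])
        top = np.argsort(d)[:top_k]                    # indices of top-k matches
        votes[name] = int((db_labels[top] == query_label).sum())
    # The winning coefficient is then used for the final search.
    return max(votes, key=votes.get), votes
```

In a full system the winning coefficient would be re-applied to rank the whole repository for the final retrieval; here the sketch stops at coefficient selection, which is the relevance-feedback step the abstract describes.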