The increasing quantity of available medical resources has motivated the development of effective search tools and medical decision support systems. Medical image search tools help physicians search medical image datasets when diagnosing a disease or monitoring its stage given a patient's previous image screenings. Image retrieval models are classified into three categories: content-based (visual), textual, and combined models. In most previous work, a single image retrieval model is applied to every user query, independently of which retrieval model best suits the information need behind the query. The main challenge in medical image retrieval is to bridge the semantic gap between user information needs and retrieval models. In this paper, we propose a novel approach for finding correlations between medical query features and retrieval models based on association rule mining. We define new medical-dependent query features, such as image modality and the presence of specific medical image terminology, and make use of existing generic query features such as query specificity, ambiguity, and cohesiveness. The proposed query features are then exploited in association rule mining to discover rules that correlate query features with visual, textual, or combined image retrieval models. Based on the discovered rules, we propose an associative classifier that finds the most suitable rule, i.e., the one with maximum feature coverage, for a new query. Experiments are performed on ImageCLEF queries from 2008 to 2012, where we evaluate the impact of our proposed query features on classification performance. Results show that combining our proposed specific and generic query features is effective for classifying queries. A comparative study between our classifier, CBA, Naïve Bayes, Bayes Net, and decision trees shows that our best-coverage associative classifier outperforms existing classifiers, achieving an improvement of 30%.
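The best-coverage rule selection described above can be illustrated with a short sketch. The rule representation, the example feature labels, and the tie-breaking by rule confidence are illustrative assumptions for exposition, not the exact formulation used in the paper:

```python
# Minimal sketch of selecting, for a new query, the matching association rule
# with maximum feature coverage. Feature names and tie-breaking are assumptions.
from dataclasses import dataclass

@dataclass
class Rule:
    antecedent: frozenset   # query features, e.g. {"modality:x-ray", "specificity:high"}
    retrieval_model: str    # "visual", "textual", or "combined"
    confidence: float       # rule confidence from association rule mining

def classify(query_features: set, rules: list, default: str = "textual") -> str:
    """Return the retrieval model of the applicable rule covering the most query features."""
    best_rule, best_coverage = None, 0
    for rule in rules:
        if rule.antecedent <= query_features:        # rule antecedent matches the query
            coverage = len(rule.antecedent)          # number of query features it covers
            if coverage > best_coverage or (
                best_rule is not None
                and coverage == best_coverage
                and rule.confidence > best_rule.confidence
            ):
                best_rule, best_coverage = rule, coverage
    return best_rule.retrieval_model if best_rule else default

# Hypothetical usage:
rules = [
    Rule(frozenset({"modality:x-ray"}), "visual", 0.8),
    Rule(frozenset({"modality:x-ray", "specificity:high"}), "combined", 0.7),
]
print(classify({"modality:x-ray", "specificity:high", "ambiguity:low"}, rules))  # -> "combined"
```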
One of the main challenges in medical image retrieval is the increasing volume of image data, which makes it difficult for domain experts to find relevant information in large data sets. Effective and efficient medical image retrieval systems are required to better manage medical image information. Text-based image retrieval (TBIR) has been very successful in retrieving images with textual descriptions. Several TBIR approaches rely on bag-of-words models, in which the image retrieval problem turns into one of standard text-based information retrieval, and the meanings and values of specific medical entities in the text and metadata are ignored in the image representation and retrieval process. However, we believe that TBIR should extract specific medical entities and terms and then exploit these elements to achieve better image retrieval results. Therefore, we propose a novel reranking method based on medical-image-dependent features. These features are manually selected by a medical expert from imaging modalities and medical terminology. First, we represent queries and images using only medical-image-dependent features such as image modality and image scale. Second, we exploit the defined features in a new reranking method for medical image retrieval. Our motivation is the large influence of image modality in medical image retrieval and its impact on image-relevance scores. To evaluate our approach, we performed a series of experiments on the medical ImageCLEF data sets from 2009 to 2013. The BM25 model, a language model, and an image-relevance feedback model are used as baselines. The experimental results show that, compared to the BM25 model, the proposed model significantly enhances image retrieval performance. We also compared our approach with other state-of-the-art approaches and show that it performs comparably to the top three runs in the official ImageCLEF competition.
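As a rough illustration of the modality-aware reranking idea, the following sketch combines a normalised baseline retrieval score (e.g., from BM25) with a modality-match boost. The linear combination and the alpha weight are assumptions made for illustration, not the paper's exact scoring function:

```python
# Hedged sketch: rerank an initial text-retrieval result list by boosting images
# whose detected modality matches the modality expressed in the query.
def rerank(results, query_modality, alpha=0.7):
    """results: list of (image_id, baseline_score, image_modality) tuples."""
    max_score = max(score for _, score, _ in results) or 1.0
    reranked = []
    for image_id, score, modality in results:
        modality_match = 1.0 if modality == query_modality else 0.0
        # Linear combination of the normalised baseline score and the modality match.
        new_score = alpha * (score / max_score) + (1 - alpha) * modality_match
        reranked.append((image_id, new_score))
    return sorted(reranked, key=lambda pair: pair[1], reverse=True)

# Hypothetical usage on three retrieved images:
hits = [("img1", 12.3, "CT"), ("img2", 11.9, "x-ray"), ("img3", 5.0, "x-ray")]
print(rerank(hits, query_modality="x-ray"))
```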