In this paper, we investigate the use of XML structure in multimedia retrieval, particularly in context-based image retrieval. We propose two methods for representing multimedia objects: the first is based on an implicit use of the textual and structural context of multimedia objects, whereas the second is based on an explicit use of both sources. Experimental evaluation is carried out on the INEX Multimedia Fragments task of 2006 and 2007. We show that there is a strong vocabulary relation between the query and the multimedia object representation, and that using XML structure significantly improves the effectiveness of multimedia retrieval.
In this paper, we are interested in multimedia XML document retrieval, whose aim is to find relevant document components (i.e., XML elements) that match the user's needs. We propose to represent multimedia elements using not only textual information but also the hierarchical structure. Indeed, an XML document can be represented as a tree whose nodes correspond to XML elements. Thanks to this representation, an analogy between XML documents and ontologies can be established. To quantify the degree to which each node participates in the multimedia element representation, we therefore propose two measures based on the ontology hierarchy. Another part of our model consists in defining the best window of multimedia fragments to return to the user. Through the evaluation of our model on the INEX 2006 Multimedia Fragments task, we show the importance of using document structure in multimedia information retrieval.
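To illustrate what such a participation measure might look like (the abstract does not reproduce the paper's two actual measures), the following Python sketch weights each node's text by a simple distance-based degree, weight = 1/(1 + dist(node, image)), over the XML tree. The formula, the toy document, and all identifiers are assumptions for illustration only.

```python
# Illustrative sketch only: the paper's exact measures are not given here,
# so we assume a simple distance-based participation degree
# weight(n) = 1 / (1 + dist(n, image_node)) over the XML tree.
import xml.etree.ElementTree as ET
from collections import defaultdict

DOC = """<article>
  <sec><p>Glacier retreat in the Alps.</p>
       <figure><image src="glacier.jpg"/><caption>Aletsch glacier</caption></figure>
  </sec>
</article>"""

def build_parents(root):
    # Map each element to its parent so we can walk upward.
    return {child: parent for parent in root.iter() for child in parent}

def distance(node, target, parents):
    """Number of tree edges between node and target via their common ancestor."""
    def path_to_root(n):
        path = [n]
        while n in parents:
            n = parents[n]
            path.append(n)
        return path
    pa, pb = path_to_root(node), path_to_root(target)
    common = next(x for x in pa if x in set(pb))
    return pa.index(common) + pb.index(common)

root = ET.fromstring(DOC)
parents = build_parents(root)
image = root.find(".//image")

# Weighted bag of words for the image: each node's text contributes
# in proportion to its participation degree.
rep = defaultdict(float)
for node in root.iter():
    w = 1.0 / (1.0 + distance(node, image, parents))
    for term in (node.text or "").split():
        rep[term.lower()] += w
print(dict(rep))
```

On this toy document, the caption's terms receive a higher weight than the paragraph's, since the caption is closer to the image in the tree, which matches the intuition that nearby nodes participate more in the image's representation.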
In this paper, we are interested in XML multimedia document retrieval, whose aim is to find relevant multimedia components (i.e., XML fragments containing media other than text) that match the user's needs. The work described here is carried out with images but can be extended to any other medium. We propose an XML multimedia fragment retrieval approach based on two steps: we first search for relevant images, and we then retrieve the best multimedia fragments containing these images. Image retrieval uses textual and structural information from ascendant, sibling, and direct descendant nodes in the XML tree, while multimedia fragment retrieval evaluates the scores of the ancestors of the images retrieved in the first step. Experiments on the INEX 2006 and 2007 Multimedia Fragments task demonstrate the effectiveness of our method.
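The second step of this two-step scheme can be sketched as follows. The first step's image scores are stubbed with fixed values, and the decay factor used to propagate scores to ancestors is an assumption for illustration, not the paper's actual scoring function.

```python
# Sketch of step two under an assumed propagation rule: each ancestor of a
# retrieved image accumulates image_score * ALPHA**hops, and the best-scoring
# ancestor is returned as the multimedia fragment. ALPHA is hypothetical.
import xml.etree.ElementTree as ET

ALPHA = 0.7  # assumed per-level decay (not from the paper)

DOC = """<article>
  <sec>
    <figure><image src="a.jpg"/></figure>
    <figure><image src="b.jpg"/></figure>
  </sec>
</article>"""

root = ET.fromstring(DOC)
parents = {c: p for p in root.iter() for c in p}

# Step 1 output: per-image relevance scores (stubbed here).
image_scores = dict(zip(root.iter("image"), [0.9, 0.4]))

# Step 2: score every ancestor of each retrieved image by decayed propagation.
frag_scores = {}
for img, s in image_scores.items():
    node, hops = img, 0
    while node in parents:
        node, hops = parents[node], hops + 1
        frag_scores[node] = frag_scores.get(node, 0.0) + s * ALPHA ** hops

best = max(frag_scores, key=frag_scores.get)
print(best.tag, frag_scores[best])
```

Note how the <sec> element outscores either <figure> here: because it aggregates decayed scores from both images, the approach can prefer a fragment that covers several relevant images over the tight container of a single one.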
The increasing quantity of available medical resources has motivated the development of effective search tools and medical decision support systems. Medical image search tools help physicians search medical image datasets to diagnose a disease or monitor its stage given a patient's previous image screenings. Image retrieval models fall into three categories: content-based (visual), textual, and combined models. In most previous work, a single image retrieval model is applied to every user query, regardless of which retrieval model best suits the information need behind the query. The main challenge in medical image retrieval is to bridge the semantic gap between user information needs and retrieval models. In this paper, we propose a novel approach, based on association rule mining, for finding correlations between medical query features and retrieval models. We define new medical-dependent query features, such as image modality and the presence of specific medical image terminology, and make use of existing generic query features such as query specificity, ambiguity, and cohesiveness. The proposed query features are then fed into association rule mining to discover rules that correlate query features with visual, textual, or combined image retrieval models. Based on the discovered rules, we propose an associative classifier that, for a new query, selects the best matching rule with maximum feature coverage. Experiments are performed on ImageCLEF queries from 2008 to 2012, where we evaluate the impact of our proposed query features on classification performance. Results show that combining our proposed specific and generic query features is effective for classifying queries. A comparative study between our classifier and CBA, Naïve Bayes, Bayes Net, and decision trees shows that our best-coverage associative classifier outperforms the existing classifiers, achieving an improvement of 30%.
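A minimal sketch of the maximum-coverage rule selection idea follows. Rule mining is reduced here to exhaustive enumeration over a toy training set rather than a proper Apriori-style miner, and the feature names, model labels, confidence threshold, and fallback model are all hypothetical, not the paper's actual configuration.

```python
# Sketch: mine (feature subset -> retrieval model) rules from toy data, then
# classify a new query with the matching rule of maximum feature coverage.
from itertools import combinations
from collections import Counter

# Toy training set: (query features, best retrieval model). Hypothetical values.
TRAIN = [
    ({"modality:xray", "specific"}, "visual"),
    ({"modality:xray", "ambiguous"}, "combined"),
    ({"terminology", "specific"}, "textual"),
    ({"terminology", "modality:mri"}, "combined"),
]

MIN_CONF = 0.6  # assumed minimum rule confidence

def mine_rules(train):
    """Enumerate small feature subsets; keep those confidently tied to one model."""
    rules = []
    for k in (1, 2):
        stats = {}
        for feats, label in train:
            for subset in combinations(sorted(feats), k):
                stats.setdefault(subset, Counter())[label] += 1
        for subset, counts in stats.items():
            label, hits = counts.most_common(1)[0]
            conf = hits / sum(counts.values())
            if conf >= MIN_CONF:
                rules.append((frozenset(subset), label, conf))
    return rules

def classify(query_feats, rules):
    """Pick the matching rule with maximum feature coverage; break ties by confidence."""
    matching = [r for r in rules if r[0] <= query_feats]
    if not matching:
        return "textual"  # assumed default model when no rule fires
    return max(matching, key=lambda r: (len(r[0]), r[2]))[1]

rules = mine_rules(TRAIN)
print(classify({"modality:xray", "specific"}, rules))  # -> "visual"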