Abstract. In this paper we describe ImageCLEF, the cross-language image retrieval track of the Cross Language Evaluation Forum (CLEF). We instigated and ran a pilot experiment in 2003 in which participants submitted entries for an ad hoc bilingual image retrieval task on a collection of historic photographs from St Andrews University Library. This was designed to simulate the situation in which users express their search request in natural language but require visual documents in return. For 2004 we …
“…As a benchmark project, ImageCLEF has become increasingly well known through its open data platform [12]. The database used in our experiments comes from ImageCLEF 2005.…”
Section: Experiments and Results Analysis
In this paper, a multi-class classification system is developed for medical images. We mainly explored ways to use different image features and compared two classifiers: Principal Component Analysis (PCA) and Support Vector Machines (SVM) with RBF (radial basis function) kernels. Experimental results showed that SVM with a combination of the middle-level blob feature and low-level features (down-scaled images and their texture maps) achieved the highest recognition accuracy. Using the 9000 training images provided by ImageCLEF05, our proposed method achieved a recognition rate of 88.9% in a simulation experiment. According to the evaluation result from the ImageCLEF05 organizer, our method achieved a recognition rate of 82% over its 1000 test images.
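The classifier described above can be illustrated with a minimal sketch. This is not the authors' code: the feature vectors are synthetic stand-ins for the down-scaled images, texture maps and blob features the abstract mentions, and the hyperparameter values are illustrative.

```python
# Illustrative sketch (not the paper's implementation): multi-class
# classification with an RBF-kernel SVM, as in the abstract above.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: 3 classes, 60-dim feature vectors standing in for a
# concatenation of low-level features and the middle-level blob feature.
n_per_class, dim = 100, 60
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, dim))
               for c in range(3)])
y = np.repeat(np.arange(3), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0)

# RBF-kernel SVM; in practice C and gamma would be tuned, e.g. by
# cross-validation on the training set.
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.3f}")
```

A recognition rate is then simply the held-out accuracy of the fitted model, as reported in the abstract for the ImageCLEF05 training and test sets.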
“…the set of documents to be searched is known prior to retrieval, but the search requests are not. Given these general categories (and others), topics were created by refinement based on attributes such as the name of the photographer, date and location. A list of topic titles can be found in [13]. These are typical of retrieval requests from picture archives, where semantic knowledge is required in addition to the image itself to perform retrieval.…”
Section: Ad Hoc Retrieval From the St Andrews Collection
“…This resulted in 30-35 images being chosen. One of the authors then used these images for query-by-example searches, with feedback and the case notes, to find further images in the database resembling the query, and selected 26 of these for the final topic set (see [14], [13]). As in the ad hoc task, participants were free to use any retrieval method, but were asked to classify their runs along three main query dimensions: with vs. without relevance feedback, visual vs. visual+text, and manual vs. automatic.…”
Section: Medical Retrieval From Casimage
Abstract. The purpose of this paper is to outline efforts from the 2004 CLEF cross-language image retrieval campaign (ImageCLEF). The aim of this CLEF track is to explore the use of both text and content-based retrieval methods for cross-language image retrieval. Three tasks were offered in the ImageCLEF track: a TREC-style ad-hoc retrieval task, retrieval from a medical collection, and a user-centered (interactive) evaluation task. Eighteen research groups from a variety of backgrounds and nationalities participated in ImageCLEF. In this paper we describe the ImageCLEF tasks, submissions from participating groups and summarise the main findings.
“…The St Andrew's documents are composed of images and text annotations as described in [3]. The search topics are similarly composed of a search image and text description.…”
Section: Text and Image Combination Runs
Abstract. For the CLEF 2004 ImageCLEF St Andrew's Collection task the Dublin City University group carried out three sets of experiments: standard cross-language information retrieval (CLIR) runs using topic translation via machine translation (MT), combination of this run with image matching results from the VIPER system, and a novel document rescoring approach based on automatic MT evaluation metrics. Our standard MT-based CLIR works well on this task. Encouragingly, combination with image matching lists is also observed to produce small positive changes in the retrieval output. However, rescoring using the MT evaluation metrics in their current form significantly reduced retrieval effectiveness.
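The combination of a text-retrieval run with an image-matching run can be sketched with a common fusion recipe: a weighted linear (CombSUM-style) sum of min-max normalised scores. This is only an illustration of the general technique; the weights, score values and document identifiers below are invented, not those used in the DCU experiments.

```python
# Hedged sketch of linear score fusion between a text run and an image
# run. All scores and document ids are illustrative.

def minmax(scores):
    """Min-max normalise a {doc_id: score} dict into [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0  # avoid division by zero for constant runs
    return {d: (s - lo) / span for d, s in scores.items()}

def fuse(text_scores, image_scores, w_text=0.8):
    """Weighted sum of normalised text and image scores per document.

    Documents missing from one run contribute 0 from that run.
    """
    t, i = minmax(text_scores), minmax(image_scores)
    docs = set(t) | set(i)
    return {d: w_text * t.get(d, 0.0) + (1 - w_text) * i.get(d, 0.0)
            for d in docs}

# Toy runs: raw retrieval-status values from a text engine and
# similarity scores from an image-matching system.
text_run = {"img_001": 12.4, "img_002": 9.1, "img_003": 3.0}
image_run = {"img_002": 0.92, "img_003": 0.88, "img_004": 0.40}

fused = fuse(text_run, image_run)
ranking = sorted(fused, key=fused.get, reverse=True)
print(ranking)
```

Weighting the text run heavily reflects the common finding, echoed in the abstract, that text retrieval dominates and image matching supplies only a small corrective signal.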