DOI: 10.1007/978-3-540-85760-0_61

MIRACLE at ImageCLEFphoto 2007: Evaluation of Merging Strategies for Multilingual and Multimedia Information Retrieval

Abstract: This paper describes the participation of the MIRACLE research consortium in the Photographic Retrieval task of ImageCLEF 2007. For this campaign, the main purpose of our experiments was to thoroughly study different merging strategies, i.e. methods of combining textual and visual retrieval techniques. While we applied all the well-known techniques that we had already used in previous campaigns, for both the textual and visual components of the system, our research primarily focused on the ide…

Cited by 9 publications (9 citation statements)
References 3 publications
“…We succeeded in submitting 41 runs. The results obtained from text-based retrieval are better than those from content-based retrieval, as in previous experiments in the MIRACLE team campaigns [5,6] using different software. Our main aim was to experiment with several merging approaches to fuse text-based and content-based retrieval results, and we improved on the text-based baseline when applying one of the three merging algorithms, although the visual results remain lower than the textual ones.…”
mentioning
confidence: 66%
“…Unfortunately, research in that direction was hindered by the unavailability of suitable datasets and lexicons for system training, development, and testing. While some Twitter-specific resources were developed, initially they were either small and proprietary, such as the i-sieve corpus [27], were created only for Spanish like the TASS corpus [70], or relied on noisy labels obtained automatically, e.g., based on emoticons and hashtags [35,36,46]. This situation changed with the shared task on Sentiment Analysis on Twitter, which was organized at SemEval, the International Workshop on Semantic Evaluation, a semantic evaluation forum previously known as SensEval.…”
Section: Historical Background
mentioning
confidence: 99%
“…Some of them include the TASS workshop in the SEPLN conference (Villena-Román et al, 2013), the RepLab workshop in the CLEF conference (Amigó et al, 2012), and the Sentiment Analysis in Twitter task (Task 2) in the last SemEval workshop (Nakov et al, 2013).…”
Section: Introduction
mentioning
confidence: 99%