2018
DOI: 10.1007/s10791-018-9331-4
An analysis of evaluation campaigns in ad-hoc medical information retrieval: CLEF eHealth 2013 and 2014

Abstract: Since its inception in 2013, one of the key contributions of the CLEF eHealth evaluation campaign has been the organization of an ad-hoc information retrieval (IR) benchmarking task. This IR task evaluates systems intended to support laypeople searching for and understanding health information. Each year the task provides registered participants with standard IR test collections consisting of a document collection and topic set. Participants then return retrieval results obtained by their IR systems for each q…

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
1
1
1
1

Citation Types

0
10
0

Year Published

2019
2019
2020
2020

Publication Types

Select...
4
2

Relationship

2
4

Authors

Journals

Cited by 13 publications (10 citation statements)
References 37 publications
“…Examining the quality and stability of the lab contributions will help the CLEF eHealth series to better understand where it should be improved and how. As future work, we intend continuing our analyses of the influence of the CLEF eHealth evaluation series from the perspectives of publications and data/software releases [3,14,15].…”
Section: A Vision for CLEF eHealth Beyond 2020
confidence: 99%
“…"Expressing an interest" for a CLEF task consists of filling in a form on the CLEF conference website with contact information, and tick boxes corresponding to the labs of interest. This is usually done several months before run submission, which explains the drop in the numbers 3. Some tasks have not presented a method ranking and/or statistical significance evaluation of this kind in the lab/task overviews.…”
mentioning
confidence: 99%
“…In 2013 and 2014 the focus of the information retrieval task was on evaluating the effectiveness of search engines to support people when searching for information about known conditions, for example, to answer queries like "thrombocytopenia treatment corticosteroids lengt", with multilingual queries added in the 2014 challenge [2,4,3]. This task aimed to model the scenario of a patient being discharged from hospital and wanting to seek more information about diagnosed conditions or prescribed treatments.…”
Section: Information Retrieval and Personalization
confidence: 99%
“…Recently, a task concerned with improving systems designed to help laypeople seeking health information online was introduced in the ShARe/CLEF eHealth Evaluation Lab (Goeuriot et al, ). Instead of biomedical literature, participating systems were asked to retrieve relevant documents from a set of websites approved by the Health On the Net (HON) Foundation, an organization that certifies health‐related websites that meet specific reliability standards, and from other hand‐picked trusted resources.…”
Section: Related Work
confidence: 99%