2018
DOI: 10.1007/978-3-319-98932-7_31

Overview of the CLEF Dynamic Search Evaluation Lab 2018

Abstract: In this paper we provide an overview of the CLEF 2018 Dynamic Search Lab. The lab ran for the first time in 2017 as a workshop. The outcomes of the workshop were used to define the tasks of this year's evaluation lab. The lab strives to answer one key question: how can we evaluate, and consequently build, dynamic search algorithms? Unlike static search algorithms, which consider user requests independently, and consequently do not adapt their ranking with respect to the user's sequence of interactions and the…
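To make the abstract's distinction concrete, here is a minimal sketch of the difference between a static ranker, which scores each request on its own, and a dynamic one, which re-weights query terms from the session's interaction history (a Rocchio-flavoured heuristic; all names, parameters, and data below are illustrative assumptions, not the lab's code):

```python
from collections import Counter

def static_rank(query_terms, docs):
    """Static ranking: every request is scored independently;
    nothing carries over between a user's queries."""
    score = lambda doc: sum(doc.count(t) for t in query_terms)
    return sorted(docs, key=score, reverse=True)

def dynamic_rank(query_terms, docs, clicked_docs, alpha=1.0, beta=0.5):
    """Dynamic ranking (illustrative sketch): term weights are adapted
    from documents the user interacted with earlier in the session, so
    the ranking depends on the whole interaction sequence."""
    weights = Counter({t: alpha for t in query_terms})
    for doc in clicked_docs:                        # session feedback so far
        for term in set(doc):
            weights[term] += beta / len(clicked_docs)
    score = lambda doc: sum(weights[t] * doc.count(t) for t in weights)
    return sorted(docs, key=score, reverse=True)

# Docs are token lists; the earlier click pulls the "session" doc upward.
docs = [["dynamic", "search"], ["static", "ranking"], ["search", "session"]]
print(dynamic_rank(["search"], docs, clicked_docs=[["search", "session"]]))
```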

Cited by 9 publications (7 citation statements) | References 17 publications

Citation statements:
“…Our experiments are conducted using two collections: the CLEF technology-assisted reviews (TAR) datasets [28][29][30] and a systematic review collection with seed studies (Seed Collection) [79]. The CLEF TAR dataset was published each year from 2017 to 2019 as a validation dataset for more effective systematic review Boolean query formulation and screening.…”
Section: Experimental Settings | Citation type: mentioning | Confidence: 99%
“…The 2018 Lab focuses on the development of an evaluation framework, where participants submit "querying agents" that generate queries to be submitted to a static retrieval system. Effective "querying agents" can then simulate users towards developing dynamic search systems [10].…”
Section: The CLEF Lab Sessions | Citation type: mentioning | Confidence: 99%
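A minimal sketch of that evaluation protocol, assuming a hypothetical agent and system interface (the names below are illustrative, not the lab's actual API): the agent emits a query, receives results from a fixed retrieval system, observes relevance feedback, and decides on its next query from the accumulated session history.

```python
class FixedSystem:
    """Stand-in for the static retrieval system (illustrative)."""
    def __init__(self, index):
        self.index = index                       # query string -> doc ids
    def search(self, query):
        return self.index.get(query, [])

def run_session(agent_next_query, system, topic, qrels, max_queries=10):
    """Drive a querying agent against a static retrieval system, in the
    spirit of the CLEF 2018 setup; only the agent adapts, never the system."""
    history, seen = [], set()
    for _ in range(max_queries):
        query = agent_next_query(topic, history)
        if query is None:                        # agent chooses to stop
            break
        results = system.search(query)           # system never adapts
        feedback = [(d, qrels.get(d, 0)) for d in results if d not in seen]
        seen.update(d for d, _ in feedback)
        history.append((query, feedback))        # the agent learns from this
    return history

# Toy agent: tries the topic title, then one reformulation, then stops.
def toy_agent(topic, history):
    attempts = [topic, topic + " evaluation"]
    return attempts[len(history)] if len(history) < len(attempts) else None

system = FixedSystem({"dynamic search": ["d1", "d2"],
                      "dynamic search evaluation": ["d3"]})
print(run_session(toy_agent, system, "dynamic search", {"d1": 1, "d3": 1}))
```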
“…Methods to automate such review processes were evaluated in the scope of the Conference and Labs of the Evaluation Forum (CLEF) with the so-called eHealth challenges regarding Technology Assisted Reviews (TAR) for systematic reviews (SR) in Empirical Medicine [10], in which relevant documents must be automatically retrieved for a given topic. The best-performing method achieved an almost perfect overall recall of relevant documents, while the recall regarding the first search results and consequently the workload reduction could be optimized further [11].…”
Section: Introduction | Citation type: mentioning | Confidence: 99%
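The recall figures in that excerpt are standard set recall over a ranked screening order. A small sketch, with invented inputs (not CLEF TAR results), of overall recall versus recall after screening only the first k results:

```python
def recall_at_k(ranked_docs, relevant, k=None):
    """Fraction of relevant documents found within the first k screened
    documents; k=None means the full ranking, i.e. overall recall."""
    screened = ranked_docs if k is None else ranked_docs[:k]
    return len(set(screened) & relevant) / len(relevant)

# Illustrative values only:
ranking = ["d3", "d7", "d1", "d9", "d2"]
relevant = {"d1", "d2", "d7"}
print(recall_at_k(ranking, relevant))        # 1.0  -> "almost perfect" overall recall
print(recall_at_k(ranking, relevant, k=3))   # ~0.67 -> recall in the first results
```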