2019
DOI: 10.1007/978-3-030-15712-8_39
Meta-evaluation of Dynamic Search: How Do Metrics Capture Topical Relevance, Diversity and User Effort?

Cited by 5 publications (3 citation statements)
References 22 publications
“…To compute compatibility we created ideal rankings by sorting the relevant documents by length, with shorter documents receiving higher ranks. If two …”

Table 2: Distributions for random BM25 and pseudo-relevance feedback parameters.

parameter        distribution   range
b                uniform        (0.0, 1.0)
k1               log uniform    (0.01, 1000.0)
depth (m)        log uniform    (8, 32)
expansions (n)   log uniform    (4, 32)
mixing (γ)       uniform        (0.0, 1.0)

Section: Parameter Tuning
confidence: 99%
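The distributions in Table 2 can be sampled directly. A minimal sketch, assuming standard log-uniform sampling (exponentiating a uniform draw over the log-range); the function and key names are illustrative, not from the cited paper:

```python
import math
import random

def log_uniform(lo, hi, rng):
    """Draw from a log-uniform distribution over [lo, hi]."""
    return math.exp(rng.uniform(math.log(lo), math.log(hi)))

def sample_parameters(rng):
    """Sample one random BM25 / pseudo-relevance feedback configuration
    following the distributions quoted in Table 2."""
    return {
        "b": rng.uniform(0.0, 1.0),                      # BM25 length normalisation
        "k1": log_uniform(0.01, 1000.0, rng),            # BM25 term-frequency scaling
        "depth_m": round(log_uniform(8, 32, rng)),       # feedback depth
        "expansions_n": round(log_uniform(4, 32, rng)),  # number of expansion terms
        "gamma": rng.uniform(0.0, 1.0),                  # query/feedback mixing weight
    }

params = sample_parameters(random.Random(0))
```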
“…In a recent comparison, Albahem et al. [4] demonstrate that this cube filling model outperforms ERR-IA and other diversity measures on properties related to intuitiveness and searcher effort. To generate an (approximately) ideal ranking based on this model, we apply it in a greedy fashion.…”
Section: An Ideal Ranking for Diversification
confidence: 99%
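The greedy construction described above can be sketched as follows. This is a generic cube-filling-style greedy loop, not the cited paper's exact model: per-subtopic document gains and the per-subtopic cap of 1.0 are illustrative assumptions, and documents are picked by largest marginal "volume" added:

```python
def greedy_ideal_ranking(doc_gains, cap=1.0):
    """Greedily build an (approximately) ideal diversified ranking.

    doc_gains: {doc_id: {subtopic: gain}}, gains in [0, 1].
    Each subtopic's accumulated gain is capped at `cap`; at every step
    the document adding the most uncapped gain is ranked next.
    """
    filled = {}                 # accumulated gain per subtopic
    remaining = dict(doc_gains)
    ranking = []

    def marginal(doc):
        # Gain this document would add, respecting the per-subtopic cap.
        return sum(min(cap, filled.get(s, 0.0) + g) - filled.get(s, 0.0)
                   for s, g in remaining[doc].items())

    while remaining:
        best = max(remaining, key=marginal)
        if marginal(best) <= 0.0:   # no document adds new coverage
            break
        for s, g in remaining[best].items():
            filled[s] = min(cap, filled.get(s, 0.0) + g)
        ranking.append(best)
        del remaining[best]
    return ranking

gains = {"d1": {"a": 0.6}, "d2": {"a": 0.6, "b": 0.4}, "d3": {"b": 0.9}}
print(greedy_ideal_ranking(gains))  # → ['d2', 'd3', 'd1']
```

At each step the document covering the most still-unfilled subtopic mass is chosen, so broadly relevant documents are ranked first and redundant ones are deferred.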