Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 2022
DOI: 10.1145/3511808.3557714
TripJudge

Cited by 5 publications (7 citation statements)
References 18 publications
“…In order to strengthen the reasoning behind the hypothesized system rankings, we evaluate them with the help of editorial relevance judgments. For this purpose, we use the previously mentioned TripJudge relevance labels [2]. The results in Figure 3 show that the system-oriented experiment gives evidence to the hypothesized relative orderings of the system performance.…”
Section: 1) (mentioning)
Confidence: 93%
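The evaluation step quoted above (scoring competing system rankings against editorial relevance labels) typically comes down to computing a graded metric such as nDCG over the judged query-document pairs. The sketch below is a minimal, self-contained Python illustration of that idea; the judgments, rankings and cutoff are invented placeholders, not TripJudge data or the citing paper's actual evaluation code.

```python
import math

def ndcg_at_k(ranked_doc_ids, qrels, k=10):
    """nDCG@k for one query: `ranked_doc_ids` is a system's ranking,
    `qrels` maps doc_id -> graded relevance label (e.g. 0-3)."""
    gains = [qrels.get(doc_id, 0) for doc_id in ranked_doc_ids[:k]]
    dcg = sum(g / math.log2(i + 2) for i, g in enumerate(gains))
    ideal = sorted(qrels.values(), reverse=True)[:k]
    idcg = sum(g / math.log2(i + 2) for i, g in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

# Hypothetical editorial judgments and two competing system rankings for one query.
qrels = {"d1": 3, "d2": 0, "d3": 2, "d4": 1}
system_a = ["d1", "d3", "d4", "d2"]
system_b = ["d2", "d4", "d1", "d3"]

print("A:", ndcg_at_k(system_a, qrels))  # ranks relevant documents first, scores higher
print("B:", ndcg_at_k(system_b, qrels))
```

Comparing such scores across systems is what supports (or refutes) a hypothesized relative ordering of system performance.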
“…By increasing , we deteriorate the ranking results in a systematic but also more subtle way, which better simulates incremental and less invasive changes to an existing search platform in an online experiment. The resulting IRM ranking is defined by…”
Section: Experimental Systems (mentioning)
Confidence: 99%
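The IRM definition itself is cut off in the excerpt above, so the following is only an illustrative stand-in for the general idea of degrading a ranking more strongly as a control parameter grows: a hypothetical scheme that applies p random adjacent swaps to the original ranking. It sketches the concept of controlled, incremental degradation and is not the interleaving rule used in the cited work.

```python
import random

def degrade_ranking(ranking, p, seed=0):
    """Illustrative degradation: apply `p` random adjacent swaps,
    so larger `p` perturbs the original ranking more (stand-in only,
    not the IRM rule from the cited paper)."""
    degraded = list(ranking)
    rng = random.Random(seed)
    for _ in range(p):
        i = rng.randrange(len(degraded) - 1)
        degraded[i], degraded[i + 1] = degraded[i + 1], degraded[i]
    return degraded

baseline = ["d1", "d2", "d3", "d4", "d5", "d6"]
for p in (0, 2, 5):
    print(p, degrade_ranking(baseline, p))
```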
“…In this paper, (1) we investigate how the amount of labelled data used for fine-tuning a PLM ranker impacts its effectiveness, (2) we adapt active learning (AL) strategies to the task of training PLM rankers, (3) we propose a budget-aware evaluation schema including aspects of annotation and computation cost, (4) we conduct an extensive analysis of AL strategies for training PLM rankers investigating the trade-offs between effectiveness, annotation budget and computational budget. We do this in the context of three common PLM ranker architectures: cross-encoders (MonoBERT [44]), single representation bi-encoders (DPR [33]) and multi-representation bi-encoders (ColBERT [34]), and two scenarios: ➊ Scratch: the PLM is pre-trained on a background corpus, but has yet to be fine-tuned to the target ranking task and dataset; ➋ Re-Train: domain adaptation of the PLM ranker is performed.…”
Section: Introduction (mentioning)
Confidence: 95%
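As a rough illustration of the active-learning setup described in the excerpt above, the sketch below runs a generic selection loop with uncertainty sampling: a stand-in ranker scores unlabelled query-document pairs, the least confident pairs are sent for annotation, and the model is fine-tuned on the growing labelled pool until the annotation budget is spent. All helpers (score_pairs, fine_tune, annotate) and the budget sizes are placeholders; the paper's concrete AL strategies and PLM architectures (MonoBERT, DPR, ColBERT) are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pool of unlabelled query-document pairs, represented only by ids.
unlabelled = [f"q{i // 4}-d{i % 4}" for i in range(40)]
labelled = {}           # pair id -> relevance label obtained from an annotator
ANNOTATION_BUDGET = 20  # total labels we can afford
BATCH_SIZE = 5          # labels requested per AL round

def score_pairs(pairs):
    """Placeholder for the PLM ranker: returns a relevance probability per pair.
    A real system would run e.g. a cross-encoder or bi-encoder here."""
    return rng.random(len(pairs))

def fine_tune(labelled_pairs):
    """Placeholder for fine-tuning the ranker on the current labelled pool."""
    pass

def annotate(pair):
    """Placeholder for the human annotator (here: a random binary label)."""
    return int(rng.random() > 0.5)

while len(labelled) < ANNOTATION_BUDGET and unlabelled:
    probs = score_pairs(unlabelled)
    # Uncertainty sampling: pick the pairs whose scores are closest to 0.5.
    chosen = np.argsort(np.abs(probs - 0.5))[:BATCH_SIZE]
    for idx in sorted(chosen, reverse=True):   # pop from the back to keep indices valid
        pair = unlabelled.pop(int(idx))
        labelled[pair] = annotate(pair)
    fine_tune(labelled)

print(f"Spent {len(labelled)} labels out of a budget of {ANNOTATION_BUDGET}")
```

A budget-aware comparison of strategies, as the citing paper proposes, would additionally track the computation spent on scoring and fine-tuning in each round, not only the number of labels requested.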
“…Data annotation typically requires a large manual effort and thus is expensive, especially in domain-specific tasks where annotators should be domain experts. In real-life settings, annotation and computational budget is often limited, especially for start-ups or in domain-specific contexts.…”
Section: Introduction (mentioning)
Confidence: 99%