2020
DOI: 10.1016/j.ipm.2019.102109

Focal elements of neural information retrieval models. An outlook through a reproducibility study

Cited by 19 publications (8 citation statements)
References 17 publications

“…We focus on systems based on WEs, leaving aside lexical ones, since our interest is to evaluate the impact of debiasing on classical performance measures. Our results, which are in line with prior art [49], show that debiasing (both regular and strong) produces negligible changes to average performance.…”
Section: Debiasing Moderately Reduces GSR (supporting)
confidence: 89%
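
The "regular" debiasing this statement refers to is typically hard debiasing in the style of Bolukbasi et al.: removing each word vector's component along a learned bias direction. The sketch below is illustrative only; the toy vectors and the random stand-in for the bias direction are assumptions, not the cited paper's implementation.

```python
import numpy as np

def hard_debias(vectors, bias_direction):
    """Remove each vector's component along the (normalized) bias direction."""
    g = bias_direction / np.linalg.norm(bias_direction)
    return {w: v - np.dot(v, g) * g for w, v in vectors.items()}

# Toy vectors; in practice these would be pre-trained word embeddings,
# and the bias direction something like vec("he") - vec("she").
rng = np.random.default_rng(0)
vecs = {w: rng.normal(size=50) for w in ("doctor", "nurse", "engineer")}
g = rng.normal(size=50)  # illustrative stand-in for a gender direction

debiased = hard_debias(vecs, g)
# Every debiased vector is now orthogonal to the bias direction.
g_hat = g / np.linalg.norm(g)
assert all(abs(np.dot(v, g_hat)) < 1e-8 for v in debiased.values())
```
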
“…Using reproducible, and thus trustworthy, statistical tools is crucial to drawing robust inferences and conclusions. In recent years, many fields have devoted considerable effort to reproducing and generalizing their systems and algorithms [60, 106, 56, 36]. Yet the literature still lacks reproducibility studies on the statistical tools used to compare the performance of such systems and algorithms.…”
Section: Introduction (mentioning)
confidence: 99%
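
For context, the statistical tools in question are typically per-topic significance tests such as the paired t-test. A minimal sketch with fabricated scores (the systems, score ranges, and topic count are all assumptions, not data from the cited study):

```python
import numpy as np
from scipy import stats

# Hypothetical per-topic effectiveness scores (e.g., AP) for two systems
# evaluated on the same 50 topics; values are fabricated for illustration.
rng = np.random.default_rng(7)
system_a = rng.uniform(0.3, 0.8, size=50)
system_b = system_a + rng.normal(0.02, 0.05, size=50)

# Paired (two-sided) t-test over per-topic score differences.
t_stat, p_value = stats.ttest_rel(system_b, system_a)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```
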
“…The feature selection problem is mitigated by deep learning and, more generally, neural approaches that have gained popularity in recent years. Although these methods are extremely versatile and generally provide good overall effectiveness, it is known that their performance is not always stable and may vary considerably across topics; for example, performance may improve for half of the topics while degrading for the other half [30]. A further disadvantage is that these neural approaches are very demanding in terms of computing resources and require enormous amounts of data, which leads to ever larger models that are not free from risks, as pointed out by Bender et al. [6].…”
Section: Introduction (mentioning)
confidence: 99%
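
The per-topic instability described above is easy to surface by inspecting score deltas rather than the mean alone. A minimal sketch with fabricated scores (topic count, score ranges, and noise level are illustrative assumptions):

```python
import numpy as np

# Fabricated per-topic scores for a baseline and a neural model.
rng = np.random.default_rng(42)
n_topics = 50
baseline = rng.uniform(0.2, 0.7, size=n_topics)
neural = baseline + rng.normal(0.0, 0.1, size=n_topics)

# The mean delta can look flat while individual topics swing both ways.
deltas = neural - baseline
print(f"mean delta:      {deltas.mean():+.3f}")
print(f"topics improved: {(deltas > 0).sum()} / {n_topics}")
print(f"topics degraded: {(deltas < 0).sum()} / {n_topics}")
```
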