2020
DOI: 10.1007/978-3-030-45442-5_58
Neural-IR-Explorer: A Content-Focused Tool to Explore Neural Re-ranking Results

Abstract: In this paper we look beyond metrics-based evaluation of Information Retrieval systems, to explore the reasons behind ranking results. We present the content-focused Neural-IR-Explorer, which empowers users to browse through retrieval results and inspect the inner workings and fine-grained results of neural re-ranking models. The explorer includes a categorized overview of the available queries, as well as an individual query result view with various options to highlight semantic connections between query-docu…

Cited by 8 publications (8 citation statements)
References 12 publications
“…They found the first passage to be a very strong indicator of overall relevance. Hofstätter et al. [14] created the Neural-IR-Explorer to help us better understand single word interactions in the score aggregation of neural re-ranking models.…”
Section: Related Work
confidence: 99%
“…The max-sum operator scans the matrix of all term-by-term interactions, a technique inspired by earlier work on kernel-pooling [18,56]. The term-by-term interaction matrix creates transparency in the scoring, as it allows inspecting the source of the different scoring parts while remaining mapped to human-readable word units [19]. However, the usefulness of this feature is reduced by the use of special tokens, especially by the query expansion with MASK tokens, as it is non-trivial to explain reliably to users what each MASK token stands for.…”
Section: ColBERT Architecture
confidence: 99%
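The max-sum operator described in the quote above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes pre-computed per-term embedding matrices (the function name `maxsim_score` and the use of plain dot products instead of normalized cosine similarity are simplifications for clarity). Each cell of the interaction matrix is one inspectable query-term/document-term score, which is what makes this aggregation transparent.

```python
import numpy as np

def maxsim_score(query_emb: np.ndarray, doc_emb: np.ndarray) -> float:
    """Max-sum ("MaxSim") aggregation over a term-by-term interaction matrix.

    query_emb: (num_query_terms, dim) array of query term embeddings
    doc_emb:   (num_doc_terms, dim) array of document term embeddings
    """
    # Full term-by-term interaction matrix: one score per (query term, doc term) pair.
    interactions = query_emb @ doc_emb.T          # shape: (q_terms, d_terms)
    # For each query term, keep its best-matching document term (max),
    # then sum the per-query-term maxima into one relevance score.
    return float(interactions.max(axis=1).sum())

# Tiny example with 2-dimensional toy embeddings:
q = np.array([[1.0, 0.0], [0.0, 1.0]])
d = np.array([[1.0, 0.0], [0.0, 0.5]])
score = maxsim_score(q, d)                        # max per row: [1.0, 0.5] -> 1.5
```

Because the maxima are taken per query term, inspecting `interactions.argmax(axis=1)` reveals exactly which document term contributed each part of the score — the transparency property the quote refers to, and also why expansion MASK tokens are hard to explain: they occupy rows of this matrix without corresponding to a user-visible word.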
“…We conducted the annotation campaign using the FiRA interface [16,17] and ran the campaign for 7 days with a fixed deadline. To control the quality of the judgements during the campaign, we monitored the average number of judgements per 12 hours and observed the daily average annotation time per relevance grade to detect random judgements.…”
Section: Annotation Campaign
confidence: 99%
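The quality-control signal described above — average annotation time per relevance grade, used to flag likely random judgements — can be sketched with the standard library alone. This is a hypothetical illustration, not the FiRA tooling: the data shape (per-annotator lists of `(grade, seconds)` pairs) and the threshold value are assumptions introduced here for the example.

```python
from collections import defaultdict
from statistics import mean

def mean_time_per_grade(judgements):
    """Average annotation time (seconds) grouped by relevance grade.

    judgements: list of (relevance_grade, seconds_spent) tuples.
    """
    by_grade = defaultdict(list)
    for grade, seconds in judgements:
        by_grade[grade].append(seconds)
    return {grade: mean(times) for grade, times in by_grade.items()}

def flag_fast_annotators(per_annotator, threshold_seconds=2.0):
    """Flag annotators whose mean time for any grade is suspiciously low.

    per_annotator: dict mapping annotator id -> list of (grade, seconds).
    A very short mean time suggests clicking through without reading,
    i.e. the "random judgements" the quoted campaign monitored for.
    """
    return [annotator
            for annotator, judgements in per_annotator.items()
            if any(t < threshold_seconds
                   for t in mean_time_per_grade(judgements).values())]

campaign = {
    "annotator_1": [(1, 10.0), (2, 12.0)],   # plausible reading times
    "annotator_2": [(1, 1.0), (2, 1.5)],     # answering in ~1s per document
}
suspicious = flag_fast_annotators(campaign)  # -> ["annotator_2"]
```

In practice a fixed threshold would likely be replaced by a per-campaign statistic (e.g. flagging annotators far below the population median), but the grouping-by-grade structure is the core of the check.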