2023
DOI: 10.1145/3576924

Extractive Explanations for Interpretable Text Ranking

Abstract: Neural document ranking models perform impressively well due to superior language understanding gained from pre-training tasks. However, due to their complexity and large number of parameters, these (typically transformer-based) models are often non-interpretable in that ranking decisions cannot be clearly attributed to specific parts of the input documents. In this paper we propose ranking models that are inherently interpretable by generating explanations as a by-product of the predic…
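Based only on the abstract above, the following is a minimal, hypothetical sketch of what an inherently interpretable, extractive ranking model could look like: a selector picks the top-k sentences of a document for the query, and the ranker scores relevance from those sentences alone, so the selected sentences double as the explanation. All class, function, and parameter names here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical select-then-rank sketch: selection produces the extractive explanation.
import torch
import torch.nn as nn

class SelectAndRank(nn.Module):
    def __init__(self, encoder_dim: int = 768, k: int = 3):
        super().__init__()
        self.k = k
        # Placeholder linear scorers; a real system would use a pre-trained encoder.
        self.select_scorer = nn.Linear(2 * encoder_dim, 1)
        self.rank_scorer = nn.Linear(2 * encoder_dim, 1)

    def forward(self, query_emb: torch.Tensor, sent_embs: torch.Tensor):
        # query_emb: (dim,), sent_embs: (num_sentences, dim)
        q = query_emb.expand(sent_embs.size(0), -1)
        pairs = torch.cat([q, sent_embs], dim=-1)

        # 1) Selection: score each sentence against the query, keep the top-k.
        sel_scores = self.select_scorer(pairs).squeeze(-1)
        k = min(self.k, sent_embs.size(0))
        top_idx = sel_scores.topk(k).indices

        # 2) Ranking: relevance is computed from the selected sentences only,
        #    so the selected indices are the explanation for the score.
        rel_score = self.rank_scorer(pairs[top_idx]).mean()
        return rel_score, top_idx

model = SelectAndRank()
score, explanation_idx = model(torch.randn(768), torch.randn(12, 768))
```

Note that a hard top-k selection like this is not differentiable; trainable variants typically rely on relaxed or stochastic selection, which is omitted here for brevity.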

Cited by 9 publications (2 citation statements)
References 85 publications (111 reference statements)
“…Previous XAI literature has contributed to explaining information retrieval systems, focusing on the interpretability of document-retrieval mechanisms [25,26,52]. For example, the authors of [52] propose a listwise explanation generator, which provides an explanation that covers all the documents contained in the page (e.g., by describing which query aspects were covered by each document).…”
Section: Contribution To Knowledge For XAI
confidence: 99%
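As a toy illustration of the listwise explanation idea attributed to [52] in the statement above, one could map each document on a result page to the query aspects it covers. The aspect-matching function below is a naive keyword stand-in, purely an assumption for illustration, not the actual generator from [52].

```python
# Hypothetical listwise explanation: which query aspects does each ranked document cover?
query_aspects = ["neural ranking", "interpretability", "evaluation"]

ranked_page = {
    "doc_1": "We propose an interpretable model for neural ranking ...",
    "doc_2": "Our evaluation compares neural ranking baselines ...",
}

def listwise_explanation(page: dict[str, str], aspects: list[str]) -> dict[str, list[str]]:
    """Naive substring match standing in for a learned aspect-coverage model."""
    return {
        doc_id: [a for a in aspects if a in text.lower()]
        for doc_id, text in page.items()
    }

print(listwise_explanation(ranked_page, query_aspects))
# {'doc_1': ['neural ranking'], 'doc_2': ['neural ranking', 'evaluation']}
```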
“…Note that a common problem in both approaches is an upper bound on the acceptable input length of contextual models, which restricts their applicability to shorter documents. When documents do not fit into the model, they are chunked into passages/sentences to fit within the token limit, either by using transformer kernels [18,19], truncation [7], or careful pre-selection of relevant text [26,53].…”
Section: Contextual Models For Ad-hoc Document Retrieval
confidence: 99%
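A minimal sketch of the passage chunking strategy mentioned in the statement above, under the assumption that whitespace tokens stand in for a real subword tokenizer and that overlapping sliding windows are used; the function name and parameters are illustrative, not taken from the cited works.

```python
# Split a long document into overlapping passages that each fit a model's token limit.
def chunk_into_passages(text: str, max_tokens: int = 512, stride: int = 256) -> list[str]:
    tokens = text.split()  # stand-in for a subword tokenizer
    passages = []
    for start in range(0, len(tokens), stride):
        window = tokens[start:start + max_tokens]
        passages.append(" ".join(window))
        if start + max_tokens >= len(tokens):
            break
    return passages
```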