Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval 2019
DOI: 10.1145/3331184.3331312

A study on the Interpretability of Neural Retrieval Models using DeepSHAP

Abstract: A recent trend in IR has been the usage of neural networks to learn retrieval models for text-based ad hoc search. While various approaches and architectures have yielded significantly better performance than traditional retrieval models such as BM25, it is still difficult to understand exactly why a document is relevant to a query. In the ML community, several approaches for explaining decisions made by deep neural networks have been proposed, including DeepSHAP, which modifies the DeepLIFT algorithm to estimate t…
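For orientation, here is a minimal sketch of the kind of attribution setup the abstract describes, using the open-source shap library's DeepExplainer (an implementation of DeepSHAP). The toy ranker, feature dimensionality, and random background set are assumptions for illustration only, not the authors' actual model:

```python
# Minimal DeepSHAP sketch for a toy neural ranker (illustrative only).
# Assumption: the ranker scores a fixed-length query-document feature
# vector; `background` stands in for the paper's reference documents.
import torch
import torch.nn as nn
import shap

# Toy ranker: scores a 100-dimensional query-document interaction vector.
model = nn.Sequential(nn.Linear(100, 32), nn.ReLU(), nn.Linear(32, 1))

background = torch.rand(50, 100)        # reference inputs attributions are measured against
explainer = shap.DeepExplainer(model, background)

x = torch.rand(1, 100)                  # features for one query-document pair
shap_values = explainer.shap_values(x)  # per-feature relevance attributions
```

The choice of `background` matters: DeepSHAP attributes the score relative to these reference inputs, which is exactly the design question the paper studies for retrieval.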

Cited by 54 publications (18 citation statements) | References 15 publications

“…Different from explainable recommendation, studies on explainable search mostly focus on the domain of ad-hoc retrieval, i.e., retrieving text documents such as news articles or web pages based on a user's query. For example, Zeon Trevor et al. [19] propose to use DeepSHAP [39] to explain the outputs of neural retrieval models; Verma and Ganguly [64] explore different sampling methods to build explanation models for a given retrieval model and propose a couple of metrics to evaluate the explanations based on the terms in queries and documents. Unfortunately, those methods are not applicable to product search, as they are purely designed for text retrieval, and text-matching signals are relatively unimportant [1,14] compared to other information such as entity relationships and user purchase history in determining users' purchases.…”
Section: Related Work (mentioning)
confidence: 99%
“…In contrast, model-agnostic interpretability focuses on explaining model outputs without knowing the internal mechanism of the model. Previous studies on explainable IR have explored both paradigms in document retrieval [19,55,56,63] by creating pre-hoc or post-hoc explanations with text-matching signals extracted by the retrieval models from query-document pairs. In product search, however, it has been shown that text matching is relatively less important [1,14] compared to other information such as knowledge entities and their relationships [24,37] in determining users' purchase decisions.…”
Section: Introduction (mentioning)
confidence: 99%
“…The rationales behind the decisions of complex learning systems are also studied in the field of interpretability in machine learning [26,59]. Recent work on extracting feature attributions using post-hoc approximations is similar to our model of explaining document preference pairs [18,47-49]. However, we crucially differ from them in two ways: we use axioms as possible explanations, and we employ a learning framework to measure fidelity to the original model rather than a combinatorial framework [47].…”
Section: Axioms / Sources (mentioning)
confidence: 99%
“…Fernando et al. [18] explore a model-introspective explainability method for neural ranking models. They use the DeepSHAP [37] model to generate explanations and define five different references for generating them: 1) a document containing only OOV words, 2) a document built by sampling words with low IDF values, 3) a document consisting of words with low query-likelihood scores, 4) a document sampled from the collection that does not appear in the top-1000 ranked list, and 5) a document sampled from the bottom of the top-1000 documents retrieved.…”
Section: Explainable Search and Recommendation (mentioning)
confidence: 99%
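To make the reference-document strategies quoted above concrete, here is a rough sketch of how two of them (OOV-only and low-IDF sampling) could be constructed. The helper names and the `idf` mapping are hypothetical, not taken from the paper's code:

```python
# Sketch of two reference-document strategies for DeepSHAP backgrounds,
# following the descriptions quoted above. Illustrative assumptions:
# documents are token lists, and `idf` maps terms to IDF scores.
import random

OOV_TOKEN = "[OOV]"

def oov_reference(doc_len: int) -> list[str]:
    """Reference document made entirely of out-of-vocabulary tokens."""
    return [OOV_TOKEN] * doc_len

def low_idf_reference(idf: dict[str, float], doc_len: int, pool: int = 100) -> list[str]:
    """Reference document sampled from the lowest-IDF (least informative) terms."""
    common = sorted(idf, key=idf.get)[:pool]  # terms with the smallest IDF
    return random.choices(common, k=doc_len)
```

Each strategy yields a "non-relevant" baseline document, and the resulting DeepSHAP attributions explain the model's score relative to that baseline, which is why the paper compares several of them.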