Proceedings of the 14th ACM International Conference on Web Search and Data Mining 2021
DOI: 10.1145/3437963.3441796

Interpretable Ranking with Generalized Additive Models

Cited by 23 publications (13 citation statements)
References 34 publications
“…Thus, webpage and document ranking methods are being used to sort and recommend corresponding information for any web inquiries effectively [21]-[23]. A few recent works employed advanced ML techniques such as Generalized Additive Models to provide global explanations for rankings [24]. Such mechanisms are considered non-competitive ranking since web pages or documents are not necessarily in direct competition to boost their rankings for cyclic evaluations.…”
Section: A Non-competitive Ranking
confidence: 99%
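As a concrete illustration of the additive ranking idea in the statement above, here is a minimal sketch of a GAM-style ranking scorer: a document's score is the sum of independently learned per-feature sub-scores, which is what makes the ranking globally explainable. The feature names, bin edges, and piecewise-constant sub-models below are illustrative assumptions, not the implementation from the cited paper.

```python
# Minimal sketch of an additive (GAM-style) ranking scorer: the relevance score
# of a document is the sum of per-feature sub-scores, so each feature's
# contribution can be inspected directly. Feature names, bin edges, and the
# piecewise-constant sub-models are illustrative assumptions only.
import numpy as np

class PiecewiseConstantSubScore:
    """One univariate sub-model f_j(x_j), implemented as a fixed bin lookup."""
    def __init__(self, bin_edges, bin_values):
        self.bin_edges = np.asarray(bin_edges)
        self.bin_values = np.asarray(bin_values)

    def __call__(self, x):
        # Map each feature value to the score of the bin it falls into.
        idx = np.clip(np.searchsorted(self.bin_edges, x), 0, len(self.bin_values) - 1)
        return self.bin_values[idx]

class AdditiveRanker:
    """score(x) = sum_j f_j(x_j); additive contributions make the ranker explainable."""
    def __init__(self, sub_scores):
        self.sub_scores = sub_scores

    def score(self, X):
        X = np.atleast_2d(X)
        return sum(f(X[:, j]) for j, f in enumerate(self.sub_scores))

    def explain(self, x):
        # Per-feature contributions for one document; plotting each f_j over its
        # feature range gives a global explanation of the ranking function.
        return {j: float(f(np.array([x[j]]))[0]) for j, f in enumerate(self.sub_scores)}

# Toy usage with two hypothetical features (e.g. a text-match score and document age).
ranker = AdditiveRanker([
    PiecewiseConstantSubScore([0.0, 5.0, 10.0], [0.0, 0.4, 0.9]),
    PiecewiseConstantSubScore([0.0, 30.0, 365.0], [0.5, 0.2, -0.1]),
])
docs = np.array([[7.2, 12.0], [3.1, 200.0]])
print(ranker.score(docs))       # additive relevance scores; higher ranks first
print(ranker.explain(docs[0]))  # per-feature contribution breakdown
```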
“…Intrinsically interpretable models, such as linear regression, logistic regression, decision trees, generalized additive models, or combinations of business decision rules, are characterized by their transparency and by a self-explainable structure. They are generally applied for use cases with legal or policy constraints (Zhuang et al., 2020), but they may well not be accurate enough for tasks such as fraud detection, which have high financial stakes. This explains why more accurate black-box models look appealing as soon as a post hoc interpretability method is applied to provide explanations either of how they work or of their results.…”
Section: SOTA Review of Fraud Detection
confidence: 99%
“…x_i) is modulated by a function of another "context" feature (i.e., c_i) [31]. In other cases, the interaction amongst features is adequately approximated by additive models of univariate functions nested within univariate functions, f(x_1, x_2, x_3) ≈ g_1(f_{1,1}(x_1) + f_{1,2}(x_2) + f_{1,3}(x_3)), or f(x_1, x_2, x_3) ≈ g_1(f_{1,1}(x_1) + f_{1,2}(x_2) + f_{1,3}(x_3)) + g_2(f_{2,1}(x_1) + f_{2,2}(x_2) + f_{2,3}(x_3)) + ….…”
Section: Introduction
confidence: 99%
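The nested additive form quoted above is easy to make concrete in code: each inner block adds up univariate functions of the individual features, its sum is passed through a univariate outer function, and the outer outputs are added together. The specific functions below are arbitrary stand-ins chosen only to show the structure, not anything from the cited work.

```python
# Numeric sketch of the nested additive form:
# f(x1, x2, x3) ≈ g1(f11(x1) + f12(x2) + f13(x3)) + g2(f21(x1) + f22(x2) + f23(x3))
# The univariate pieces below are arbitrary illustrative choices.
import numpy as np

def nested_additive(x, inner_blocks, outer_funcs):
    """Each inner block is a list of univariate functions, one per feature; the
    block's additive output feeds one univariate outer function, and the outer
    outputs are summed."""
    total = 0.0
    for fs, g in zip(inner_blocks, outer_funcs):
        total += g(sum(f(xj) for f, xj in zip(fs, x)))
    return total

inner_blocks = [
    [np.log1p, np.sqrt, lambda v: 0.5 * v],   # f11, f12, f13
    [np.tanh, lambda v: v ** 2, np.abs],      # f21, f22, f23
]
outer_funcs = [np.tanh, lambda s: 0.1 * s]    # g1, g2

print(nested_additive([2.0, 4.0, 1.0], inner_blocks, outer_funcs))
```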
“…For example, the proposed distillation methodology can be applied to additive models trained using bagged boosted decision trees, as demonstrated in our results. Similarly, the technique can be applied to additive neural nets [2], as demonstrated by Zhuang et al [31]. Listing 1.1 and Figure 1.1 show textual and graphical representations of a model obtained by applying our approach to a decision forest GAM learned from the COMPAS dataset.…”
Section: Introduction
confidence: 99%
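For the additive neural nets mentioned above, a rough sketch (with untrained, random placeholder weights, and not the cited implementation) is one tiny network per feature whose scalar outputs are summed, so each feature's learned shape function can be inspected in isolation.

```python
# Rough sketch of an "additive neural net": one small per-feature network whose
# scalar outputs are summed into the final score. Weights here are random
# placeholders; a real model would train them with a ranking or regression loss.
import numpy as np

rng = np.random.default_rng(0)

def make_feature_net(hidden=8):
    """Random 1 -> hidden -> 1 MLP standing in for one learned shape function."""
    return {
        "w1": rng.normal(size=(1, hidden)), "b1": np.zeros(hidden),
        "w2": rng.normal(size=(hidden, 1)), "b2": np.zeros(1),
    }

def feature_net_forward(net, x_col):
    h = np.maximum(0.0, x_col[:, None] @ net["w1"] + net["b1"])  # ReLU hidden layer
    return (h @ net["w2"] + net["b2"]).ravel()                    # scalar contribution

def additive_forward(nets, X):
    # Final score is the sum of independent per-feature contributions.
    return sum(feature_net_forward(net, X[:, j]) for j, net in enumerate(nets))

X = rng.normal(size=(5, 3))                    # 5 examples, 3 features
nets = [make_feature_net() for _ in range(X.shape[1])]
print(additive_forward(nets, X))               # additive scores, one per example
print(feature_net_forward(nets[0], X[:, 0]))   # contribution of feature 0 alone
```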