2020
DOI: 10.48550/arXiv.2010.07388
Preprint
Interpretable Machine Learning with an Ensemble of Gradient Boosting Machines

Abstract: A method for the local and global interpretation of a black-box model on the basis of the well-known generalized additive models is proposed. It can be viewed as an extension or a modification of the algorithm using the neural additive model. The method is based on using an ensemble of gradient boosting machines (GBMs) such that each GBM is learned on a single feature and produces a shape function of the feature. The ensemble is composed as a weighted sum of separate GBMs, resulting in a weighted sum of shape func…
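The additive structure described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: one small GBM (boosted depth-1 stumps) is fit per feature, and the model is a sum of the resulting shape functions. As a simplification, the per-feature weights the paper derives are replaced here by fitting each GBM to the running residual in one stagewise pass; all function names are hypothetical.

```python
# Sketch of a GBM-per-feature additive model (assumed simplification of the
# paper's weighted-sum construction; pure Python, no external dependencies).

def fit_stump(x, residual):
    """Best single-threshold split on one feature (a depth-1 regression
    tree), predicting the mean residual on each side."""
    best = None  # (sse, threshold, left_mean, right_mean)
    for t in sorted(set(x))[:-1]:
        left = [r for xi, r in zip(x, residual) if xi <= t]
        right = [r for xi, r in zip(x, residual) if xi > t]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    if best is None:  # constant feature: predict the mean residual everywhere
        m = sum(residual) / len(residual)
        return float("inf"), m, m
    return best[1], best[2], best[3]

def fit_single_feature_gbm(x, residual, n_rounds=30, lr=0.2):
    """A tiny GBM trained on ONE feature; returns its shape function."""
    stumps, cur = [], list(residual)
    for _ in range(n_rounds):
        t, lm, rm = fit_stump(x, cur)
        stumps.append((t, lm, rm))
        cur = [c - lr * (lm if xi <= t else rm) for c, xi in zip(cur, x)]
    return lambda v: lr * sum(lm if v <= t else rm for t, lm, rm in stumps)

def fit_additive_ensemble(X, y):
    """Fit one GBM per feature; the model is base + sum of shape functions,
    so each feature's contribution is directly readable from its shape."""
    base = sum(y) / len(y)
    residual = [yi - base for yi in y]
    shapes = []
    for k in range(len(X[0])):
        col = [row[k] for row in X]
        shape = fit_single_feature_gbm(col, residual)
        shapes.append(shape)
        residual = [r - shape(v) for r, v in zip(residual, col)]
    def predict(row):
        return base + sum(s(v) for s, v in zip(shapes, row))
    return predict, shapes
```

On a toy additive target such as y = 2*x0 + x1 with binary features, `shapes[0]` recovers a jump of about 2 between x0 = 0 and x0 = 1, which is exactly the kind of per-feature explanation the abstract describes.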

Year Published: 2021, 2021, 2024, 2024

Cited by 4 publications (5 citation statements)
References 43 publications
“…The impact of every feature on the prediction in these models is determined by its corresponding shape function obtained by each neural network. Following the ideas behind these interpretation models, [35] proposed a similar model. In contrast to the method proposed by [34], an ensemble of gradient boosting machines was used in [35] instead of neural networks in order to simplify the explanation model training process.…”
Section: Related Work
confidence: 99%
“…The impact of every feature on the prediction in these models is determined by its corresponding shape function obtained by each neural network. Following ideas behind these interpretation models, Konstantinov and Utkin [28] proposed a similar model. In contrast to the method proposed by Agarwal et al [3], an ensemble of gradient boosting machines is used in [28] instead of neural networks in order to simplify the explanation model training process.…”
Section: Related Work
confidence: 99%
“…The impact of every feature on the prediction in these models is determined by its corresponding shape function obtained by each neural network. Following ideas behind these interpretation models, Konstantinov and Utkin [37] proposed a similar model, but an ensemble of gradient boosting machines is used instead of neural networks in order to simplify the explanation model training process.…”
Section: Related Work
confidence: 99%
“…It should be noted that problems (37) do not have solutions for some k when the inequality π_k^U < α_k^L holds. This follows from the constraints α_k^L ≤ α_k in (39) and α_k ≤ π_k^U in (42), which jointly require α_k^L ≤ π_k^U.…”
Section: Proposition
confidence: 99%