Seventh IEEE International Conference on Data Mining Workshops (ICDMW 2007) 2007
DOI: 10.1109/icdmw.2007.127
Generalized Additive Models from a Neural Network Perspective

Abstract: Recently, an interactive algorithm was proposed for the construction of generalized additive neural networks. Although the proposed method is sound, it has two drawbacks. It is subjective, as it relies on the modeler to identify complex trends in partial residual plots, and it can be very time-consuming, as multiple iterations of pruning and adding neurons to the hidden layers of the neural network are required. In this article, an automatic algorithm is proposed that alleviates both drawbacks. Given a predictive …
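The additive structure underlying a generalized additive neural network can be sketched briefly: each input feature is passed through its own small subnetwork, and the partial outputs are summed with a global bias, so each feature's contribution can be inspected in isolation. The sketch below is a minimal illustration of that structure only, not the paper's automatic construction algorithm; all names and shapes are assumptions for the example.

```python
import numpy as np

def gann_forward(X, params):
    """Forward pass of a generalized additive neural network sketch:
    each feature x_j goes through its own one-hidden-layer subnetwork
    f_j, and the partial outputs are summed with a global bias."""
    bias, subnets = params
    out = np.full(X.shape[0], bias)
    for j, (W1, b1, w2) in enumerate(subnets):
        h = np.tanh(np.outer(X[:, j], W1) + b1)  # hidden layer for feature j
        out += h @ w2                            # partial contribution f_j(x_j)
    return out

rng = np.random.default_rng(0)
n, p, hidden = 5, 3, 4
X = rng.normal(size=(n, p))
# One (W1, b1, w2) triple per feature: a separate tiny subnetwork each.
params = (0.1, [(rng.normal(size=hidden),
                 rng.normal(size=hidden),
                 rng.normal(size=hidden)) for _ in range(p)])
y = gann_forward(X, params)
```

Because the model is a plain sum of per-feature terms, plotting each `h @ w2` against its feature recovers the partial effect plots that make such models interpretable.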

Cited by 4 publications (3 citation statements)
References 12 publications
“…We will show how a shallow network, the Multi-Layer Perceptron (MLP) can be fully explained by formulating it as a General Additive Neural Network (GANN). This methodology has a long history [4]. However, to our knowledge there is no method to derive the GANN from data, rather a model structure needs to be assumed or hypothesized from experimental data analysis.…”
Section: Introduction
confidence: 99%
“…The idea of generating feature-level interpretability in deep neural networks by translating GAMs into a neural framework was already introduced by Potts (1999) and expanded by de Waal and du Toit (2007). While the framework was remarkably parameter-sparse, it did not use backpropagation and hence did not achieve as good predictive results as GAMs, while remaining less interpretable.…”
Section: Literature Review
confidence: 99%
“…Chang et al (2021) introduced NODE-GAM, a differentiable model based on forgetful decision trees developed for high-risk domains. All these models follow the additive framework from GAMs and learn the nonlinear additive features with separate networks, one for each feature or feature interaction, either leveraging MLPs (Potts, 1999;de Waal and du Toit, 2007;Agarwal et al, 2021;Yang et al, 2021;Radenovic et al, 2022), using decision trees (Chang et al, 2021) or using Splines (Rügamer et al, 2020;Seifert et al, 2022;Luber et al, 2023).…”
Section: Literature Review
confidence: 99%