2020
DOI: 10.48550/arxiv.2007.10505
Preprint

DeepNNK: Explaining deep models and their generalization using polytope interpolation

Sarath Shekkizhar,
Antonio Ortega

Abstract: Modern machine learning systems based on neural networks have shown great success in learning complex data patterns while being able to make good predictions on unseen data points. However, the limited interpretability of these systems hinders further progress and application to several domains in the real world. This predicament is exemplified by time-consuming model selection and the difficulties faced in predictive explainability, especially in the presence of adversarial examples. In this paper, we take a …

Cited by 3 publications (10 citation statements)
References 33 publications

“…DeepNNK [7] is a non-parametric interpolation framework, based on local polytopes obtained using NNK graphs [10], that replaces the standard softmax classification layer of a neural network for model evaluation and inference, using the activations at the penultimate layer as input features. A key advantage of DeepNNK is that label interpolation is performed based on the relative positions of points in the training set, which makes it possible to perform leave-one-out estimation to characterize the task performance of a model and its generalization without the need for additional data, namely a validation set.…”
Section: DeepNNK: Neural Network and NNK Interpolation
Citation type: mentioning (confidence: 99%)
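
A minimal sketch of the label-interpolation idea described above, assuming penultimate-layer activations as features, a Gaussian kernel, and one-hot training labels. The NNK weights are approximated here with a non-negative least-squares solve over a k-nearest-neighbor candidate set, one common way to realize the NNK objective; the authors' exact solver may differ, and `gaussian_kernel`, `nnk_interpolate`, and all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def gaussian_kernel(A, B, sigma=1.0):
    """Pairwise Gaussian kernel between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def nnk_interpolate(query, feats, labels, k=10, sigma=1.0):
    """Interpolate a soft label for `query` from its k nearest training points.

    Neighbor weights solve min_{theta >= 0} ||K_S theta - k_Sq||, an
    approximation of the NNK objective; points with non-zero weight
    form the local polytope around the query.
    """
    d2 = ((feats - query) ** 2).sum(-1)
    idx = np.argsort(d2)[:k]                          # k-NN candidate set S
    K_S = gaussian_kernel(feats[idx], feats[idx], sigma)
    k_Sq = gaussian_kernel(feats[idx], query[None, :], sigma).ravel()
    theta, _ = nnls(K_S, k_Sq)                        # non-negative weights
    support = theta > 1e-8                            # polytope vertices
    w = theta[support] / theta[support].sum()
    return w @ labels[idx][support]                   # convex label combination
```

The prediction is a convex combination of the labels of the polytope vertices, which is what makes the interpolation depend only on the relative positions of training points.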
“…Training is stopped for a given channel if its corresponding LOO classification performance no longer improves. Our proposed method achieves comparable test performance with fewer training iterations than existing methods, such as validation-set-based early stopping and aggregate-feature-vector-based DeepNNK [7]. Our early-stopping criterion has lower runtime and the further advantage of not requiring any validation data to be set aside.…”
Section: Introduction
Citation type: mentioning (confidence: 97%)
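
Because the interpolation depends only on training points, leave-one-out (LOO) evaluation needs no held-out data. Below is a hypothetical sketch of an LOO-based stopping rule built on `nnk_interpolate` above; `train_one_epoch` and `penultimate_features` are placeholder helpers, and the per-channel variant used in the citing work is not reproduced.

```python
import numpy as np

def loo_accuracy(feats, labels, k=10, sigma=1.0):
    """LOO accuracy: predict each training point from the remaining ones."""
    n, correct = len(feats), 0
    for i in range(n):
        mask = np.arange(n) != i
        pred = nnk_interpolate(feats[i], feats[mask], labels[mask], k, sigma)
        correct += int(pred.argmax() == labels[i].argmax())
    return correct / n

# Stop training once LOO accuracy stops improving (patience of 3 epochs).
best, stale = 0.0, 0
for epoch in range(100):
    train_one_epoch(model)                                   # assumed helper
    feats, labels = penultimate_features(model, train_set)   # assumed helper
    acc = loo_accuracy(feats, labels)
    if acc > best:
        best, stale = acc, 0
    else:
        stale += 1
        if stale >= 3:
            break
```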
“…Further, NNK neighborhoods are stable for large enough K, and their size is indicative of the intrinsic dimension (ID) of the manifold the data belongs to [18]. This method has been shown to provide advantages for semi-supervised learning, image representation [19], and label interpolation and generalization estimation in neural networks [20]. Graph properties have also been used to interpret deep neural network performance [21], latent space geometry [22,23], and to improve model robustness [24].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
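
A speculative illustration of the neighborhood-size observation, reusing the helpers above: the number of non-zero NNK weights (the polytope size) stabilizes once K is large enough and can serve as a rough local proxy for the intrinsic dimension. The exact estimator of [18] is not reproduced; the threshold and default K are arbitrary choices for the sketch.

```python
def nnk_polytope_size(query, feats, k=50, sigma=1.0):
    """Count non-zero NNK weights; a rough local intrinsic-dimension proxy."""
    d2 = ((feats - query) ** 2).sum(-1)
    idx = np.argsort(d2)[:k]
    K_S = gaussian_kernel(feats[idx], feats[idx], sigma)
    k_Sq = gaussian_kernel(feats[idx], query[None, :], sigma).ravel()
    theta, _ = nnls(K_S, k_Sq)
    return int((theta > 1e-8).sum())   # stable in k once k is large enough
```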