ESANN 2021 Proceedings
DOI: 10.14428/esann/2021.es2021-40

AGLVQ - Making Generalized Vector Quantization Algorithms Aware of Context

Abstract: Generalized Learning Vector Quantization methods are a powerful and robust approach for classification tasks. They compare incoming samples with representative prototypes for each target class. While prototypes are physically interpretable, they do not account for changes in the environment. We propose a novel framework for incorporating context information into prototype generation. Dependencies can be modeled in a modular way, ranging from polynomials to neural networks. Evaluations on artificial and real…
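For readers unfamiliar with the GLVQ family referenced in the abstract, the sketch below illustrates plain prototype-based classification and the standard GLVQ relative-distance cost. It is a minimal, assumption-laden example (Euclidean distance, one prototype per class, placeholder data) and not the context-aware AGLVQ method proposed in the paper.

```python
import numpy as np

# Minimal GLVQ-style sketch (illustrative only, not the paper's AGLVQ variant):
# each class is represented by a prototype vector, and an incoming sample is
# assigned to the class of its nearest prototype.

def glvq_classify(x, prototypes, labels):
    """Assign x to the label of the closest prototype (Euclidean distance)."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    return labels[np.argmin(dists)]

def glvq_cost(x, y, prototypes, labels):
    """Standard GLVQ relative-distance cost mu(x) = (d+ - d-) / (d+ + d-),
    where d+ is the distance to the closest prototype of the correct class
    and d- the distance to the closest prototype of any other class."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    d_plus = dists[labels == y].min()
    d_minus = dists[labels != y].min()
    return (d_plus - d_minus) / (d_plus + d_minus)

# Example usage with two toy prototypes.
prototypes = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.array([0, 1])
print(glvq_classify(np.array([0.2, 0.1]), prototypes, labels))  # -> 0
```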

Cited by 2 publications (3 citation statements); references 5 publications.
“…Contributions of the special session on "Interpretable Models in Machine Learning and Explainable Artificial Intelligence" cover a broad range of the previously mentioned aspects: interpretability of prototype-based methods for classification and efficient data representation [49,20,29], interpretability of Support Vector Machines (SVMs) [54], interpretability of random forests [38], explainability of black-box models [12,26,35], and informativeness of linguistic properties in word representations [5].…”
Section: Contributions From ESANN 2021
confidence: 99%
“…With respect to prototype-based models, the approach described by Kaden et al. [29] realizes information bottleneck learning by combining counterpropagation and LVQ, whereas Graeber et al. [20] use context information and prototype adaptation during inference for better LVQ performance and interpretability. Taylor and Merényi [49] propose an improvement to t-SNE which allows automated specification of its perplexity parameter using topological information about a data manifold revealed through prototype-based learning.…”
Section: Contributions From ESANN 2021
confidence: 99%
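The citation statement above describes the contribution only in one line. As a rough illustration of what "context information and prototype adaptation during inference" could mean, the following sketch generates each prototype from a context vector through a simple polynomial map. The function names, feature expansion, and weight shapes are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical sketch of context-dependent prototypes: instead of a fixed
# prototype per class, each prototype is produced from a context vector c
# by a small parametric map (here: linear in polynomial context features).

def context_features(c):
    """Degree-2 polynomial expansion of the context vector, plus a bias term."""
    c = np.asarray(c, dtype=float)
    return np.concatenate(([1.0], c, np.outer(c, c)[np.triu_indices(len(c))]))

def generate_prototypes(weights, c):
    """Map the current context to one prototype per class.

    weights: array of shape (n_classes, n_features, n_context_features),
    one weight matrix per class (values here are placeholders)."""
    phi = context_features(c)
    return np.einsum('kfc,c->kf', weights, phi)

# Example: 2 classes, 2-dimensional data, 1-dimensional context.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 2, 3))            # placeholder weights
protos = generate_prototypes(W, c=[0.5])  # prototypes adapted to context c
print(protos.shape)                       # (2, 2)
```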
“…, 2018). The president of the Australian Association of University Professors (Graeber, 2021) suggested academic work becomes counter-productive when purely inwards facing. For example, Joseph et al.…”
Section: Introduction
confidence: 99%