2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA)
DOI: 10.1109/icmla.2019.00125

Disentangling and Learning Robust Representations with Natural Clustering

Abstract: Learning representations that disentangle the underlying factors of variability in data is an intuitive way to achieve generalization in deep models. In this work, we address the scenario where generative factors present a multimodal distribution due to the existence of class distinction in the data. We propose N-VAE, a model which is capable of separating factors of variation which are exclusive to certain classes from factors that are shared among classes. This model implements an explicitly compositional la…
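To make the idea in the abstract concrete, the sketch below shows one way a VAE-style latent code could be split into a block shared across classes and a block reserved for class-exclusive factors. This is a hypothetical PyTorch illustration, not the authors' N-VAE implementation; the layer sizes, the class-conditioned decoder, and all names are assumptions.

```python
# Hypothetical sketch: a VAE whose latent code is split into a block shared
# across classes and a block meant for class-exclusive factors.
# Illustration of the idea in the abstract only -- NOT the authors' N-VAE code.
import torch
import torch.nn as nn

class SplitLatentVAE(nn.Module):
    def __init__(self, x_dim=784, shared_dim=8, exclusive_dim=4, n_classes=10):
        super().__init__()
        z_dim = shared_dim + exclusive_dim
        self.shared_dim = shared_dim
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.logvar = nn.Linear(256, z_dim)
        # A single decoder conditioned on the class label stands in for any
        # more elaborate compositional structure the paper may use.
        self.dec = nn.Sequential(
            nn.Linear(z_dim + n_classes, 256), nn.ReLU(),
            nn.Linear(256, x_dim), nn.Sigmoid(),
        )

    def forward(self, x, y_onehot):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        z_shared = z[:, :self.shared_dim]      # factors shared among classes
        z_exclusive = z[:, self.shared_dim:]   # factors exclusive to certain classes
        x_hat = self.dec(torch.cat([z_shared, z_exclusive, y_onehot], dim=1))
        return x_hat, mu, logvar
```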


Cited by 7 publications (6 citation statements) · References 3 publications
“…Several methods exist to identify counterfactual explanations, such as FACE [22], which uses the shortest path to identify counterfactual explanations from high-density regions, and Growing Spheres (GS) [16] which employs random sampling within increasing hyperspheres for finding counterfactuals. CLUE [3] identifies counterfactuals with low uncertainty in terms of the classifier's entropy within the data distribution. Similarly, manifold-based CCHVAE [21] generates high-density counterfactuals through the use of a latent space model.…”
Section: Related Work (mentioning)
confidence: 99%
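As a rough illustration of the Growing Spheres idea mentioned in the quote above (random sampling within hyperspheres of increasing radius around the query until the prediction flips), here is a simplified sketch. It is not the reference GS implementation; `predict_fn`, the radius schedule, and the sample count are assumptions.

```python
import numpy as np

def growing_spheres_counterfactual(x, predict_fn, step=0.1, n_samples=500,
                                   max_radius=10.0, rng=None):
    """Search for a counterfactual by sampling inside hyperspheres of
    increasing radius around x until the predicted label changes.
    Simplified sketch of the Growing Spheres idea, not the reference code."""
    rng = np.random.default_rng() if rng is None else rng
    original_label = predict_fn(x[None, :])[0]
    radius = step
    while radius <= max_radius:
        # Random directions scaled to land within the current radius.
        directions = rng.normal(size=(n_samples, x.shape[0]))
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)
        radii = rng.uniform(0.0, radius, size=(n_samples, 1))
        candidates = x + directions * radii
        labels = predict_fn(candidates)
        flipped = candidates[labels != original_label]
        if len(flipped) > 0:
            # Return the closest candidate whose prediction differs.
            return flipped[np.argmin(np.linalg.norm(flipped - x, axis=1))]
        radius += step
    return None  # No counterfactual found within max_radius.
```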
“…We used the native CARLA catalog for the Give Me Some Credit (GMSC) [12], Adult Income (Adult) [9] and Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) [2] data sets as well as pre-trained models (both the Neural Network (NN) and Logistic Regression (LR)). The NN has three hidden layers of size [18, 9, 3], and the LR is a single input layer leading to a Softmax function. Although AR is proposed for linear models, it can be extended to nonlinear models by the local linear decision boundary approximation method LIME [24] (referred to as AR-LIME).…”
Section: Empirical Evaluation (mentioning)
confidence: 99%
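For orientation, the architectures described in this quote (a network with hidden layers of width 18, 9 and 3, and a logistic regression as a single linear layer into a softmax) could be written roughly as below. This is a sketch, not CARLA's pre-trained models; the input dimension, activations, and output width are assumptions.

```python
import torch.nn as nn

def make_nn_classifier(n_features: int, n_classes: int = 2) -> nn.Module:
    """Hidden widths (18, 9, 3) follow the quoted description;
    ReLU activations and the binary output head are assumptions."""
    return nn.Sequential(
        nn.Linear(n_features, 18), nn.ReLU(),
        nn.Linear(18, 9), nn.ReLU(),
        nn.Linear(9, 3), nn.ReLU(),
        nn.Linear(3, n_classes), nn.Softmax(dim=1),
    )

def make_logistic_regression(n_features: int, n_classes: int = 2) -> nn.Module:
    """A single linear layer leading to a softmax, as described."""
    return nn.Sequential(nn.Linear(n_features, n_classes), nn.Softmax(dim=1))
```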
“…For instance, the influence of different architectures and training procedures on outputs, and a better description of those outputs, can help with the choice of a proper model for a given problem and, in general, with transparency and trustworthiness. In the same way, it is essential to have well-calibrated uncertainty for believing the prediction outputs [13,14]. Uncertainty estimates can also serve as a means of transparency, as they inform when the model does not know the correct prediction [15].…”
Section: Bayesian Neural Network Distributional Properties (mentioning)
confidence: 99%
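The "well-calibrated uncertainty" this quote refers to is often quantified with a binned expected calibration error (ECE); the generic sketch below is offered only as context and is not taken from the cited works.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: average |accuracy - confidence| over equal-width
    confidence bins, weighted by the fraction of samples in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()
            conf = confidences[in_bin].mean()
            ece += in_bin.mean() * abs(acc - conf)
    return ece
```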
“…Our usage of the Bhattacharyya coefficient is another key difference. Antoran and Miguel (2019) also structured the latent space to support both class-related and class-dependent factors. However, their method is completely supervised and hence cannot be used in many cases.…”
Section: Related Work (mentioning)
confidence: 99%
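For reference, the Bhattacharyya coefficient mentioned here measures the overlap between two distributions; for discrete distributions it is BC(p, q) = sum_i sqrt(p_i * q_i). The minimal sketch below shows only that computation; how the cited work applies it to latent representations is not reproduced here.

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    """Overlap between two discrete probability distributions:
    BC(p, q) = sum_i sqrt(p_i * q_i), equal to 1 for identical
    distributions and 0 for disjoint supports."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

# Example: two overlapping categorical distributions.
print(bhattacharyya_coefficient([0.5, 0.3, 0.2], [0.4, 0.4, 0.2]))
```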
“…; Chen et al. (2018); Lavda et al. (2019)), semi-supervised (Siddharth et al. (2017); Joy et al. (2020); Kim et al. (2020)), and supervised ones (Klys et al. (2018); Antoran and Miguel (2019)).…”
(mentioning)
confidence: 99%