2023
DOI: 10.1088/2632-2153/acb316

Optimizing training trajectories in variational autoencoders via latent Bayesian optimization approach

Abstract: Unsupervised and semi-supervised ML methods such as variational autoencoders (VAE) have become widely adopted across multiple areas of physics, chemistry, and materials sciences due to their capability in disentangling representations and ability to find latent manifolds for classification and/or regression of complex experimental data. Like other ML problems, VAEs require hyperparameter tuning, e.g., balancing the Kullback–Leibler (KL) and reconstruction terms. However, the training process and resulting mani…
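For context, the "balancing the Kullback–Leibler (KL) and reconstruction terms" the abstract mentions is typically expressed as a β-weighted VAE objective. The following is a minimal PyTorch sketch of that objective, not the paper's code; the function name, the mean-squared-error likelihood, and the diagonal-Gaussian posterior are illustrative assumptions:

```python
# Minimal sketch of the beta-weighted VAE objective: reconstruction term
# plus a KL term whose weight (beta) is the hyperparameter to be tuned.
# Assumes a diagonal-Gaussian posterior q(z|x) = N(mu, diag(exp(log_var))).
import torch
import torch.nn.functional as F

def vae_loss(x, x_recon, mu, log_var, beta=1.0):
    """Return recon + beta * KL, averaged over the batch."""
    # Reconstruction term (MSE here; a Bernoulli likelihood would use
    # binary cross-entropy instead).
    recon = F.mse_loss(x_recon, x, reduction="sum") / x.size(0)
    # Closed-form KL divergence between q(z|x) and the prior N(0, I).
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp()) / x.size(0)
    return recon + beta * kl
```

A fixed β is the simplest case; a per-epoch β trajectory, as discussed in the citing statements below, turns this single knob into a high-dimensional schedule that is expensive to tune by hand.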

Cited by 7 publications (4 citation statements: 0 supporting, 4 mentioning, 0 contrasting)
References 58 publications

Citation statements:
“…al. 47 attempted the approach of projecting into a low-dimensional latent space with a variational autoencoder; and Oh et al.…”
Section: Human-in-the-Loop Based Automated Experiments (AE) Workflow (mentioning)
confidence: 99%
“…However, tuning this high-dimensional, iteration-dependent hyperparameter prior to training would be a very time-consuming task, given that the sh-VAE model is computationally expensive. One approach we took earlier to tune such a trajectory hyperparameter is the latent Bayesian optimization approach [35], but it would still take a significant amount of time to conduct the optimization. Here, bypassing the need for, and the mentioned challenges of, prior tuning of particularly high-dimensional hyperparameters (e.g.…”
Section: Analysis: STEM Images of NiO-LSMO Heterostructure (mentioning)
confidence: 99%
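The statement above describes compressing a per-epoch trajectory hyperparameter into a low-dimensional parameterization and searching that space with Bayesian optimization. Below is a hedged Python sketch of that idea using scikit-optimize's gp_minimize; the exponential schedule form, the bounds, and the train_and_score placeholder are assumptions for illustration, not the cited papers' implementation:

```python
# Sketch of Bayesian optimization over a low-dimensional parameterization
# of a KL-weight trajectory, instead of tuning every per-epoch value.
import numpy as np
from skopt import gp_minimize

def beta_schedule(epochs, beta0, decay):
    """Exponentially decaying trajectory beta_t = beta0 * exp(-decay * t)."""
    t = np.arange(epochs)
    return beta0 * np.exp(-decay * t)

def train_and_score(params):
    """Placeholder objective: in practice this would train the (expensive)
    model with the given schedule and return a validation loss. A cheap
    synthetic surrogate stands in here so the sketch runs as-is."""
    beta0, decay = params
    schedule = beta_schedule(epochs=50, beta0=beta0, decay=decay)
    return float((schedule.mean() - 0.1) ** 2 + 0.01 * (beta0 - 1.0) ** 2)

# Gaussian-process BO over the 2-D (beta0, decay) space.
result = gp_minimize(
    train_and_score,
    dimensions=[(0.1, 10.0), (0.0, 0.5)],  # bounds for beta0 and decay
    n_calls=20,
    random_state=0,
)
print("best (beta0, decay):", result.x)
```

Searching two schedule parameters with roughly 20 expensive training runs is far cheaper than tuning a separate weight for every epoch, which is the motivation the statement gives for the latent approach.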
“…The development of rotationally invariant autoencoders extended these approaches to general orientations and to the discovery of chemical transformation pathways in disordered systems [31][32][33][34]. Such rotationally invariant autoencoders have also been tuned with Bayesian optimization to maximize the uncovering of features from complex microscopic data [35]. However, for the time being these methods have relied on descriptors centered at individual atomic units, whereas unsupervised discovery allowing for translation invariance has remained far more complex.…”
Section: Introduction (mentioning)
confidence: 99%
“…Buckell et al. incorporated the displayed preference data into the model to calibrate the market forecast [13]. Furthermore, numerous researchers have employed Bayesian optimization techniques to optimize hyperparameters [14][15][16].…”
Section: Introduction (mentioning)
confidence: 99%