2021
DOI: 10.1021/acsnano.0c08914
Disentangling Rotational Dynamics and Ordering Transitions in a System of Self-Organizing Protein Nanorods via Rotationally Invariant Latent Representations

Abstract: The dynamics of complex ordering systems with active rotational degrees of freedom, exemplified by protein self-assembly, are explored using a machine learning workflow that combines deep learning-based semantic segmentation with rotationally invariant variational autoencoder (rVAE)-based analysis of orientation and shape evolution. The latter disentangles the particle orientation from the other degrees of freedom and compensates for lateral shifts. The disentangled representations in the latent space encode…
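The workflow summarized in the abstract — semantic segmentation of the raw image series, followed by extraction of fixed-size particle sub-images that feed the rVAE — can be sketched in a few lines of Python. The sketch below is illustrative only: the image series is random placeholder data, and segment_particles is a hypothetical stand-in (a simple intensity threshold) for the trained deep-learning segmentation model described in the paper.

```python
import numpy as np

def segment_particles(frame, thresh=0.99):
    """Toy stand-in for the deep-learning semantic segmentation step:
    treat pixels above an intensity quantile as particle centers.
    The actual workflow uses a trained segmentation network here."""
    rows, cols = np.where(frame > np.quantile(frame, thresh))
    return list(zip(rows, cols))

def extract_crops(frame, centers, window=32):
    """Cut fixed-size sub-images around particle centers; crops that
    would run off the frame edge are skipped so the stack has the
    uniform shape required by the downstream rVAE."""
    half = window // 2
    crops = []
    for r, c in centers:
        if half <= r < frame.shape[0] - half and half <= c < frame.shape[1] - half:
            crops.append(frame[r - half:r + half, c - half:c + half])
    return np.stack(crops) if crops else np.empty((0, window, window))

movie = np.random.rand(3, 256, 256)   # placeholder for the image series
stack = np.concatenate([extract_crops(f, segment_particles(f)) for f in movie])
print(stack.shape)                    # (n_particles, 32, 32)
```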

Cited by 26 publications (15 citation statements). References 69 publications.
“…In this regard, there is a growing body of work to embed physics in machine learning models, which serve as the ultimate regularizers. For instance, rotational (Kalinin et al., 2020; Oxley et al., 2020) and Euclidean equivariance (Smidt, 2020; Smidt et al., 2021) have been built into model architectures, and methods to learn sparse representations of the underlying governing equations have been developed (Champion et al., 2019; de Silva et al., 2020; Kaheman et al., 2020).…”
Section: Exemplars of Domain Applications
Citation type: mentioning; confidence: 99%
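The "sparse representations of underlying governing equations" this statement refers to is SINDy-style sparse regression; the de Silva et al. (2020) reference corresponds to the PySINDy library. Below is a minimal sketch of that idea — the damped-oscillator trajectory is synthetic data invented purely for illustration.

```python
import numpy as np
import pysindy as ps

# Synthetic trajectory of a damped harmonic oscillator:
#   dx/dt = -0.1 x + 2 y,   dy/dt = -2 x - 0.1 y
t = np.linspace(0, 25, 2500)
x = np.exp(-0.1 * t) * np.cos(2 * t)
y = -np.exp(-0.1 * t) * np.sin(2 * t)
X = np.column_stack([x, y])

# Sparse regression over a polynomial feature library (STLSQ optimizer
# by default): the fit should recover the two linear terms in each
# equation and zero out everything else.
model = ps.SINDy(feature_names=["x", "y"])
model.fit(X, t=t)
model.print()
```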
“…An alternative approach is developed based on rotationally invariant variational autoencoders (rVAE), a class of unsupervised machine learning methods that project discrete, high-dimensional spaces onto a continuous latent space. Previously, we applied the rVAE approach to explore the evolution of atomic-scale structures in graphene under electron-beam irradiation, [43] analyze the self-assembly of protein nanorods, [44] investigate domain-wall dynamics in piezoresponse force microscopy, [45] and create a bottom-up symmetry-analysis workflow for atom-resolved data. [46] Here, we demonstrate that this approach can be extended to explore domain evolution mechanisms via detailed analysis of the rVAE latent spaces.…”
Section: Results
Citation type: mentioning; confidence: 99%
“…We used the rVAE implementation in Kalinin et al. [20] and AtomAI [21]. This rVAE encodes the images into unstructured latent variables, of which we used only two, L1 and L2, together with the latent variables encoding the rotational angle, Lθ, and the translations in x and y, L∆x and L∆y.…”
Section: Results
Citation type: mentioning; confidence: 99%
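For reference, a minimal sketch of how such an analysis looks with AtomAI's rVAE is given below. The call pattern (models.rVAE, fit, encode) follows AtomAI's documented interface, but the exact split of the encoded vector into angle, translation, and unstructured components is an assumption about the library's convention and should be verified against the AtomAI version in use; the input stack here is random placeholder data.

```python
import numpy as np
import atomai as aoi

# Placeholder stack of particle sub-images, shape (n_particles, 32, 32);
# in the actual workflow these come from the semantic segmentation step.
stack = np.random.rand(500, 32, 32)

# Rotationally invariant VAE with two unstructured latent variables
# (L1, L2); with translation=True the model additionally learns the
# rotation angle and the x/y shifts as special latent variables.
rvae = aoi.models.rVAE(stack.shape[1:], latent_dim=2, translation=True)
rvae.fit(stack, training_cycles=100, batch_size=100)

# Encode: the returned latent mean is assumed to hold
# [angle, dx, dy, L1, L2] (AtomAI convention; verify for your version).
z_mean, z_sd = rvae.encode(stack)
angle, shifts, latents = z_mean[:, 0], z_mean[:, 1:3], z_mean[:, 3:]
```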