2022
DOI: 10.1101/2022.09.30.510350
Preprint

Holographic-(V)AE: an end-to-end SO(3)-Equivariant (Variational) Autoencoder in Fourier Space

Abstract: Group-equivariant neural networks have emerged as a data-efficient approach to solve classification and regression tasks, while respecting the relevant symmetries of the data. However, little work has been done to extend this paradigm to the unsupervised and generative domains. Here, we present Holographic-(V)AE (H-(V)AE), a fully end-to-end SO(3)-equivariant (variational) autoencoder in Fourier space, suitable for unsupervised learning and generation of data distributed around a specified origin. H-(V)AE is t…
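As a concrete illustration of the abstract's "Fourier space" framing, the short sketch below (hypothetical code, not from the paper) projects a point cloud centered on a chosen origin onto spherical harmonics and checks one simple consequence of SO(3) equivariance in that space: each degree-l coefficient block mixes unitarily under rotation, so its norm is a rotation invariant. All names are illustrative.

# Illustrative sketch (not the authors' code) of a Fourier-space view of a
# point cloud around a specified origin: spherical harmonic coefficients per
# degree l, whose per-degree power is unchanged when the cloud is rotated.
import numpy as np
from scipy.special import sph_harm
from scipy.spatial.transform import Rotation

def sh_coefficients(points, l_max):
    """Complex spherical harmonic coefficients of a point cloud's directions,
    one vector of length 2l+1 per degree l (illustrative helper, not from [2])."""
    xyz = points / np.linalg.norm(points, axis=1, keepdims=True)
    polar = np.arccos(np.clip(xyz[:, 2], -1.0, 1.0))
    azim = np.arctan2(xyz[:, 1], xyz[:, 0])
    coeffs = {}
    for l in range(l_max + 1):
        ms = np.arange(-l, l + 1)
        # scipy's sph_harm takes (m, l, azimuthal angle, polar angle)
        Y = sph_harm(ms[:, None], l, azim[None, :], polar[None, :])
        coeffs[l] = Y.conj().sum(axis=1)          # sum over the points
    return coeffs

rng = np.random.default_rng(0)
cloud = rng.normal(size=(50, 3))                  # toy point cloud around the origin
R = Rotation.random(random_state=1).as_matrix()

c, c_rot = sh_coefficients(cloud, 4), sh_coefficients(cloud @ R.T, 4)
for l in range(5):
    # rotating the cloud mixes each degree-l block by a (unitary) Wigner-D matrix,
    # so the per-degree norm is a rotation invariant
    assert np.isclose(np.linalg.norm(c[l]), np.linalg.norm(c_rot[l]))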

Cited by 1 publication (3 citation statements)
References 56 publications
“…HERMES architecture. We build upon improvements to the HCNN architecture presented in [2], leading to a model that is ∼2.75x faster and more accurate, with a comparable number of parameters (∼3.5M) to the architecture in [1] (Figure 1A). In short, atomic neighborhoods, i.e., featurized point clouds, are first projected onto the orthonormal Zernike Fourier basis, centered at the (masked) central residue's Cα.…”
Section: A. Development of HERMES (citation type: mentioning)
confidence: 99%
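The projection step described in this excerpt can be sketched as follows. This is a hedged stand-in, not the HERMES implementation: the orthonormal Zernike radial functions of [2] are replaced by Gaussian radial shells for brevity, atom-type channels are omitted, and all names are illustrative; only the overall pattern (re-center on the Cα, then expand in a radial basis times spherical harmonics) follows the quoted text.

# Hypothetical featurization sketch: expand an atomic neighborhood, re-centered
# on the residue's C-alpha, in (radial basis) x (spherical harmonics). The paper
# uses the orthonormal Zernike Fourier basis; Gaussian shells stand in here.
import numpy as np
from scipy.special import sph_harm

def neighborhood_coefficients(atom_xyz, ca_xyz, n_radial=5, l_max=3, r_cut=10.0):
    """Return a (n_radial, (l_max+1)**2) array of complex coefficients for one
    neighborhood centered at ca_xyz (illustrative stand-in, not Zernike)."""
    x = atom_xyz - ca_xyz                                  # center on the (masked) C-alpha
    r = np.linalg.norm(x, axis=1)
    x, r = x[r < r_cut], r[r < r_cut]                      # keep atoms inside the cutoff
    polar = np.arccos(np.clip(x[:, 2] / r, -1.0, 1.0))
    azim = np.arctan2(x[:, 1], x[:, 0])
    shells = np.linspace(0.0, r_cut, n_radial)
    radial = np.exp(-0.5 * (r[None, :] - shells[:, None]) ** 2)     # (n_radial, n_atoms)
    coeffs = np.zeros((n_radial, (l_max + 1) ** 2), dtype=complex)
    col = 0
    for l in range(l_max + 1):
        ms = np.arange(-l, l + 1)
        Y = sph_harm(ms[:, None], l, azim[None, :], polar[None, :])  # (2l+1, n_atoms)
        coeffs[:, col:col + 2 * l + 1] = radial @ Y.conj().T
        col += 2 * l + 1
    return coeffs

# toy usage: 30 random "atoms" around a C-alpha placed at the origin
rng = np.random.default_rng(0)
print(neighborhood_coefficients(rng.normal(scale=4.0, size=(30, 3)), np.zeros(3)).shape)  # (5, 16)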
“…Finally, the embedding is fed to an MLP to generate the desired predictions. We refer the reader to [2] for details of the architecture, and for a mathematical introduction to building SO(3)-equivariant models in Fourier space. We implement HERMES using e3nn [31].…”
Section: A. Development of HERMES (citation type: mentioning)
confidence: 99%
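Since this excerpt notes that HERMES is built with the e3nn library, the minimal sketch below shows how an SO(3)-equivariant layer and its equivariance check typically look with e3nn's o3 module. It is a hedged illustration, not HERMES code, and the irreps sizes are arbitrary rather than the HERMES configuration.

# Minimal e3nn sketch (assumes e3nn and PyTorch are installed); not HERMES code.
import torch
from e3nn import o3

irreps_in = o3.Irreps("4x0e + 4x1o")       # 4 scalar + 4 vector feature channels
irreps_out = o3.Irreps("2x0e + 2x1o")
layer = o3.Linear(irreps_in, irreps_out)   # rotation-equivariant linear map

x = irreps_in.randn(8, -1)                 # batch of 8 random feature vectors
R = o3.rand_matrix()                       # random rotation matrix
D_in = irreps_in.D_from_matrix(R)          # block-diagonal Wigner-D on the input
D_out = irreps_out.D_from_matrix(R)        # ... and on the output

# Equivariance: rotating the input and then applying the layer matches applying
# the layer and then rotating the output.
assert torch.allclose(layer(x @ D_in.T), layer(x) @ D_out.T, atol=1e-5)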