2022
DOI: 10.1063/5.0110322

Deep convolutional neural networks for generating atomistic configurations of multi-component macromolecules from coarse-grained models

Abstract: Despite the modern advances in the available computational resources, the length and time scales of the physical systems that can be studied in full atomic detail, via molecular simulations, are still limited. To overcome such limitations, coarse-grained (CG) models have been developed to reduce the dimensionality of the physical system under study. However, to study such systems at the atomic level, it is necessary to re-introduce the atomistic details into the CG description. Such an ill-posed mathematical p…

Cited by 7 publications (9 citation statements)
References 55 publications
“…Doxastakis and co-workers trained a generative adversarial network (GAN) to perform deterministic back-mapping of polymer configurations in analogy to super-resolution image reconstruction, in which the CG and AA configurations correspond to low- and high-resolution images, respectively . Similarly, Harmandaris and co-workers recently trained a convolutional neural network to predict AA configurations for polymer chains by predicting atomic bond vectors conditioned upon the CG coordinates and chemistry of the corresponding monomer . After relaxing local intermolecular interactions, the resulting polymer melt quite accurately matched the equilibrium structural and thermodynamic properties of the AA polymer model.…”
Section: Coarse-grained Representation
confidence: 99%
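The decoding step this citation describes — predicting atomic bond vectors conditioned on the CG coordinates, then reassembling atomistic positions around each bead — can be sketched minimally in numpy. This is an illustration only, not the authors' code: `backmap_monomer` and the explicit bond vectors stand in for what the trained CNN would predict.

```python
import numpy as np

def backmap_monomer(cg_bead, bond_vectors):
    """Reconstruct atomistic coordinates for one monomer from its CG bead
    position and a set of predicted displacement (bond) vectors.

    In the CNN approach described above, a network predicts the vectors
    conditioned on CG coordinates and monomer chemistry; here they are
    supplied explicitly for illustration (hypothetical values)."""
    cg_bead = np.asarray(cg_bead, dtype=float)            # (3,) bead position
    bond_vectors = np.asarray(bond_vectors, dtype=float)  # (n_atoms, 3)
    # Remove the mean offset so the reconstructed atoms keep the bead
    # position as their geometric center, a common backmapping constraint.
    centered = bond_vectors - bond_vectors.mean(axis=0)
    return cg_bead + centered

atoms = backmap_monomer([1.0, 2.0, 3.0],
                        [[0.5, 0.0, 0.0],
                         [-0.5, 0.0, 0.0],
                         [0.0, 0.3, 0.0]])
print(atoms.mean(axis=0))  # geometric center coincides with the CG bead
```

In practice the reconstructed configuration would then be relaxed with short local-interaction minimization, as the quoted statement notes, before structural properties are compared to the all-atom model.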
“…Li and coworkers 261 implemented a method based on convolutional conditional generative adversarial networks (GANs), following an image representation of red−green−blue values extracted from the systems' position vectors and using a cis-1,4-polyisoprene melt as the test system. In a subsequent work 262 they developed a backmapping strategy that makes use of convolutional autoencoder neural networks. They trained the models to reconstruct the atomistic detail at the level of individual monomers, accounting for their type (each monomer was mapped to a CG bead), using the distribution of bond lengths conditioned on the CG position as the target.…”
Section: Featurization
confidence: 99%
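The image encoding mentioned in this citation — mapping Cartesian position vectors to red−green−blue pixel values so a convolutional GAN can operate on them — amounts to rescaling each (x, y, z) triplet into the 0–255 range. A hedged sketch, assuming a cubic box and a simple per-particle encoding; the function name and box size are illustrative, not taken from the cited work:

```python
import numpy as np

def positions_to_rgb(positions, box_length):
    """Map Cartesian position vectors to RGB pixel values by scaling each
    (x, y, z) coordinate into [0, 255] via fractional box coordinates.
    Illustrative only: the cited GAN work builds a full image from such
    channels; here one pixel per particle is produced."""
    scaled = np.asarray(positions, dtype=float) / box_length  # fractional
    scaled = np.clip(scaled, 0.0, 1.0)
    return (scaled * 255).astype(np.uint8)                    # (n, 3) RGB

img = positions_to_rgb([[1.0, 2.0, 3.0],
                        [4.0, 0.5, 3.5]], box_length=4.0)
```

The encoding is invertible up to quantization, which is what lets the generator's output image be decoded back into atomistic coordinates.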
“…(Upper line) Schematic representation of the strategy implemented by Christofi et al. to decode CG systems of polybutadiene random copolymers consisting of three monomer types: cis-1,4, trans-1,4, and vinyl-1,2 monomers. (Lower line) Details of the monomer description used: blue indicates the bond vectors; green indicates the distances between the atomistic particles q i (a united-atom representation was considered) and the center of mass of the monomer Q (in red); q 0 denotes the coordinates of the last united-atom particle of the previous monomer along the chain.…”
Section: Machine Learning Enabled Macromolecular Coarse-grained Simul...
confidence: 99%
“…An autoencoder is typically trained by minimizing the mismatch (i.e., the loss) between the reconstructed data and the input data. Materials scientists have applied autoencoders for multiple purposes, such as reconstruction of experimental characterization data, molecular structures, or microscopy images; clustering and/or classification of latent space representations; obtaining material design parameters or deriving order parameters from latent space representations; and molecular or material property optimization based on the latent space representations. Some of the studies mentioned here used a modified version of the autoencoder called a variational autoencoder (VAE), which maps the encoded latent space to a multidimensional standard Gaussian distribution; this has the benefit of a continuous latent space, compared to the sparse latent space one would get from the encodings of an unmodified autoencoder.…”
Section: Introductionmentioning
confidence: 99%
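The VAE property this citation highlights — mapping the latent space onto a multidimensional standard Gaussian — comes from the reparameterization trick plus a KL-divergence penalty toward N(0, I). A minimal numpy sketch of those two pieces, with illustrative values (no trained encoder is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, sigma^2) via the reparameterization trick:
    z = mu + sigma * eps, with eps drawn from the standard normal."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL divergence between N(mu, sigma^2) and the standard Gaussian
    prior -- the term that pulls the VAE latent space toward N(0, I),
    giving the continuous latent space described above."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# Hypothetical encoder outputs for a 2-dimensional latent space.
mu = np.array([0.2, -0.1])
log_var = np.array([-1.0, -2.0])
z = reparameterize(mu, log_var, rng)
```

The KL term vanishes exactly when the encoder outputs the prior (mu = 0, log_var = 0), which is why minimizing reconstruction loss plus KL keeps the latent space both informative and continuous.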