2023
DOI: 10.1021/acs.jctc.3c00923

Permutationally Invariant Networks for Enhanced Sampling (PINES): Discovery of Multimolecular and Solvent-Inclusive Collective Variables

Nicholas S. M. Herringer,
Siva Dasetty,
Diya Gandhi
et al.

Abstract: The typically rugged nature of molecular free-energy landscapes can frustrate efficient sampling of the thermodynamically relevant phase space due to the presence of high free-energy barriers. Enhanced sampling techniques can improve phase space exploration by accelerating sampling along particular collective variables (CVs). A number of techniques exist for the data-driven discovery of CVs parametrizing the important large-scale motions of the system. A challenge to CV discovery is learning CVs invariant to th…
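The permutational invariance in the paper's title refers to descriptors that are unchanged when identical particles are relabeled. A minimal sketch of one such descriptor, in the spirit of the permutation-invariant vector (PIV) discussed by the citing works below, is to pass all pairwise distances through a switching function and sort the result (the function name, switching-function form, and cutoff value here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def piv_descriptor(positions, r0=0.4):
    """PIV-style sketch: sorted, switched pairwise distances.

    positions: (n_particles, 3) array of coordinates.
    r0: hypothetical switching-function cutoff (nm).
    Sorting makes the vector invariant to any permutation of
    identical particles, since it depends only on the multiset
    of pairwise distances.
    """
    i, j = np.triu_indices(len(positions), k=1)
    r = np.linalg.norm(positions[i] - positions[j], axis=-1)
    s = 1.0 / (1.0 + (r / r0) ** 6)  # rational switching function
    return np.sort(s)[::-1]
```

Because only the sorted values enter the descriptor, exchanging two identical molecules (or solvent particles) leaves it bit-for-bit identical, which is the invariance property the abstract refers to.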

Cited by 6 publications (2 citation statements)
References 110 publications
“…In this way, each component in ξ has a corresponding weight which encodes its importance: a low σ means that the corresponding CV component is highly correlated to the committor. ξ can be composed of simple, intuitive CVs but can also include high-dimensional, abstract representations such as commonly used local descriptors for machine-learned interatomic potentials (e.g., atom-centered symmetry functions and the smooth overlap of atomic positions (SOAP)) or global variants such as the permutation-invariant vector (PIV). Optimal values for σ and the regularization parameter λ are obtained through optimization by minimizing a loss function based on a training set, distinct from the set of configurations used in KRR (the reference set).…”
Section: From Standard to Data-Driven Path Collective Variables
confidence: 99%
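The scheme this citing work describes — kernel ridge regression (KRR) of the committor with a per-component length scale σ, where a small σ flags an important CV component — can be sketched as follows (the function names, the Gaussian kernel form, and the synthetic data are assumptions for illustration, not the cited paper's code):

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma):
    # Anisotropic Gaussian kernel: each CV component i has its own
    # length scale sigma[i]; a small sigma[i] lets the fit vary
    # sharply along component i, marking it as important.
    d = (X1[:, None, :] - X2[None, :, :]) / sigma
    return np.exp(-0.5 * np.sum(d ** 2, axis=-1))

def krr_fit(X_ref, q_ref, sigma, lam):
    # Ridge-regularized fit of committor values q_ref on the
    # reference set X_ref.
    K = gaussian_kernel(X_ref, X_ref, sigma)
    return np.linalg.solve(K + lam * np.eye(len(X_ref)), q_ref)

def krr_predict(X, X_ref, alpha, sigma):
    return gaussian_kernel(X, X_ref, sigma) @ alpha

def loss(log_params, X_train, q_train, X_ref, q_ref):
    # As in the quoted text: sigma and lambda are tuned by
    # minimizing a loss on a training set *distinct* from the
    # KRR reference set. log-parametrized to keep both positive.
    sigma, lam = np.exp(log_params[:-1]), np.exp(log_params[-1])
    alpha = krr_fit(X_ref, q_ref, sigma, lam)
    resid = krr_predict(X_train, X_ref, alpha, sigma) - q_train
    return np.mean(resid ** 2)
```

In practice `loss` would be handed to a generic optimizer; the learned per-component σ values then serve as the importance weights described in the quote.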
“…Different from the above two hand-crafted classes of OPs, recent breakthroughs in machine learning (ML) techniques have given rise to a range of neural network (NN)-based OPs for a variety of problems, including crystal nucleation. The inherent differentiability of these OPs makes them suited for various enhanced sampling methods that involve the modification of a system’s Hamiltonian. We specifically highlight graph neural networks (GNNs), which have emerged as powerful tools in the realm of materials science, including but not limited to efficient descriptions of material energetics, accurate predictions on material properties, and robust classifiers of crystal structures and defects.…”
Section: Introduction
confidence: 99%
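The point about differentiability in this citing work — that an NN-based order parameter s(x) can be biased because ds/dx is available, so a biasing potential U(s) exerts well-defined forces −(dU/ds)(ds/dx) on the atoms — can be illustrated with a toy one-hidden-layer OP (all weights, the harmonic bias, and its parameters k and s0 are hypothetical):

```python
import numpy as np

def nn_op(x, W1, b1, w2):
    # Toy NN order parameter s(x) on a flat coordinate vector x.
    h = np.tanh(W1 @ x + b1)
    return w2 @ h

def nn_op_grad(x, W1, b1, w2):
    # Analytic ds/dx via the chain rule; this differentiability is
    # what makes NN-based OPs usable in Hamiltonian-modifying
    # enhanced sampling methods.
    h = np.tanh(W1 @ x + b1)
    return ((1.0 - h ** 2) * w2) @ W1

def bias_force(x, W1, b1, w2, k=10.0, s0=0.5):
    # Harmonic bias U(s) = (k/2)(s - s0)^2; the force on the
    # coordinates follows from the chain rule.
    s = nn_op(x, W1, b1, w2)
    return -k * (s - s0) * nn_op_grad(x, W1, b1, w2)
```

In a real setting the OP would be a GNN and the gradient would come from automatic differentiation, but the chain-rule structure of the bias force is the same.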