Recent progress in the design and optimization of neural-network quantum states (NQSs) has made them an effective method to investigate ground-state properties of quantum many-body systems. In contrast to the standard approach of training a separate NQS from scratch at every point of the phase diagram, we demonstrate that the optimization of an NQS at a highly expressive point of the phase diagram (i.e., close to a phase transition) yields features that can be reused to accurately describe a wide region across the transition. We show the feasibility of our approach on different systems in one and two dimensions by initially pretraining an NQS at a given point of the phase diagram, followed by fine-tuning only the output layer for all other points. Notably, the computational cost of the fine-tuning step is very low compared to the pretraining stage. We argue that the reduced cost of this paradigm has significant potential to advance the exploration of strongly correlated systems using NQSs, mirroring the success of fine-tuning in machine learning and natural language processing.
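The pretrain-then-fine-tune protocol described above can be illustrated with a minimal sketch. The snippet below is a hypothetical toy model, not the authors' implementation: a small network's hidden layer plays the role of the pretrained NQS feature map and is kept frozen, while only the linear output layer is refit (here by least squares on a stand-in target) at each new point of the phase diagram. All names and the target function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pretrained" hidden layer: in the paper's protocol these
# weights would come from optimizing the NQS near a phase transition and
# are then frozen during fine-tuning.
W_hidden = rng.normal(size=(8, 4))  # maps 4 inputs to 8 features


def features(x):
    # Frozen nonlinear feature map (stands in for the pretrained NQS body).
    return np.tanh(x @ W_hidden.T)


# Fine-tuning step: only the output layer is refit at a new parameter
# point. Here this reduces to a cheap linear least-squares problem,
# illustrating why fine-tuning costs far less than pretraining.
X = rng.normal(size=(100, 4))       # stand-in configurations
y = np.sin(X.sum(axis=1))           # stand-in target at the new point
Phi = features(X)
w_out, *_ = np.linalg.lstsq(Phi, y, rcond=None)

pred = Phi @ w_out                  # fine-tuned model's predictions
```

The frozen hidden weights are shared across all points of the phase diagram, so only the small vector `w_out` must be optimized per point; this is the source of the cost reduction the abstract refers to.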
Published by the American Physical Society, 2024