Neural networks can be used to identify phases and phase transitions in condensed matter systems via supervised machine learning. We show that a standard feed-forward neural network, readily programmable through modern software libraries, can be trained to detect multiple types of order parameters directly from raw state configurations sampled with Monte Carlo. In addition, it can detect highly nontrivial states such as Coulomb phases and, when modified into a convolutional neural network, topological phases with no conventional order parameter. We show that this classification occurs within the neural network without knowledge of the Hamiltonian or even the general locality of interactions. These results demonstrate the power of machine learning as a basic research tool in the field of condensed matter and statistical physics.
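As a concrete illustration of the supervised workflow summarized above, the following is a minimal sketch (not the authors' code) of a fully connected feed-forward classifier, built with the TensorFlow/Keras API, that maps raw spin configurations to a two-class phase label. The lattice size, layer widths, and training settings are illustrative assumptions, and the random placeholder data stands in for phase-labeled Monte Carlo samples.

# Minimal sketch: a feed-forward classifier for L x L spin configurations.
# All numerical settings below are assumed for illustration only.
import numpy as np
import tensorflow as tf

L = 20                       # linear lattice size (assumed)
n_sites = L * L

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_sites,)),                  # flattened spin configuration
    tf.keras.layers.Dense(100, activation="sigmoid"),  # single hidden layer
    tf.keras.layers.Dense(2, activation="softmax"),    # P(ordered), P(disordered)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder data: in practice the configurations come from Monte Carlo
# sampling above and below the transition, labeled by the phase they belong to.
x_train = np.random.choice([-1.0, 1.0], size=(1000, n_sites))
y_train = np.random.randint(0, 2, size=(1000,))
model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)

At the transition, the output of such a classifier crosses over between the two labels, which is how a phase boundary can be read off from the network's predictions.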
Condensed matter physics is the study of the collective behavior of massively complex assemblies of electrons, nuclei, magnetic moments, atoms or qubits [1]. This complexity is reflected in the size of the classical or quantum state space, which grows exponentially with the number of particles. This exponential growth is reminiscent of the "curse of dimensionality" commonly encountered in machine learning: a target function to be learned requires an amount of training data that grows exponentially in the dimension (e.g. the number of image features). Despite this curse, the machine learning community has developed a number of techniques with remarkable abilities to recognize, classify, and characterize complex sets of data. In light of this success, it is natural to ask whether such techniques could be applied to the arena of condensed matter physics, particularly in cases where the microscopic Hamiltonian contains strong interactions and numerical simulations are typically employed in the study of phases and phase transitions [2,3].

We demonstrate that modern machine learning architectures, such as fully connected and convolutional neural networks [4], can provide a complementary approach to identifying phases and phase transitions in a variety of systems in condensed matter physics. The training of neural networks on data sets obtained by Monte Carlo sampling provides a particularly powerful and simple framework for the supervised learning of phases and phase boundaries in physical models, and can be easily built from readily available tools such as the Theano [5] or TensorFlow [6] libraries.

Conventionally, the study of phases in condensed matter systems is performed with the help of tools that have been carefully designed to elucidate the underlying physical structures of various states. Among the most powerful are Monte Carlo simulations, which consist of two steps: a stochastic importance sampling over state space, and the evaluation of estimators for physical quantities calculated from these samples [3]. These estimators are constructed based on a variety of physical impetuses, e.g. the ready availability of an analogous experi...
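The two Monte Carlo steps just described, importance sampling over state space followed by the evaluation of estimators on the samples, can be sketched for the two-dimensional Ising model as follows. This is an illustrative outline rather than the simulation code used in this work; the lattice size, temperature, and sweep counts are assumed values.

# Minimal sketch of (i) Metropolis importance sampling of spin configurations
# and (ii) evaluating a simple estimator (magnetization per site) on the samples,
# for the 2D Ising model on an L x L lattice with periodic boundaries.
# Lattice size, temperature, and sweep counts are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)
L, T = 20, 2.0                       # lattice size and temperature (J = k_B = 1)
spins = rng.choice([-1, 1], size=(L, L))

def metropolis_sweep(spins, T):
    """One sweep of single-spin-flip Metropolis updates."""
    for _ in range(spins.size):
        i, j = rng.integers(0, L, size=2)
        # Energy change from flipping spin (i, j), with periodic boundaries.
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

# Step (i): stochastic importance sampling of configurations.
samples = []
for sweep in range(2000):
    metropolis_sweep(spins, T)
    if sweep >= 1000:                # discard equilibration sweeps
        samples.append(spins.copy())

# Step (ii): estimator evaluated on the sampled configurations.
m = np.mean([abs(s.mean()) for s in samples])
print(f"<|m|> at T = {T}: {m:.3f}")

In the supervised approach, the same sampled configurations serve directly as training data for the neural network, with labels assigned by the phase from which they were drawn rather than by any hand-built estimator.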