“…Variational inference refers to the use of greatly simplified, approximate distributions to draw inferences from complex models. Variational neural networks were pioneered with the Boltzmann machine (Hinton, 2007), the Helmholtz machine (Dayan et al., 1995), and their later generalization, the variational autoencoder (VAE; Kingma and Welling, 2014), and have since been further generalized to networks of many kinds (Zhang et al., 2019), including networks that imitate neurobiology (Mostafa and Cauwenberghs, 2018; Neftci et al., 2016). In these models, sampling from the variational distributions is used to approximate an intractable integral in the objective function, to learn regularized solutions, and to generate plausible out-of-sample realizations from the learned latent distributions.…”
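
The sampling idea in the last sentence can be sketched concretely. Below is a minimal, self-contained illustration (not the authors' method) of a Monte Carlo estimate of the evidence lower bound (ELBO) for a toy conjugate model, using the reparameterization trick popularized by the VAE. All names and the specific model — prior p(z) = N(0, 1), likelihood p(x|z) = N(z, 1), variational family q(z) = N(mu, sigma^2) — are illustrative assumptions chosen so the result can be checked against the closed-form marginal likelihood.

```python
import math
import random

def log_normal(x, mean, std):
    """Log density of a univariate Gaussian N(mean, std^2)."""
    return -0.5 * math.log(2 * math.pi * std**2) - (x - mean)**2 / (2 * std**2)

def elbo_estimate(x, mu, sigma, n_samples=10_000, seed=0):
    """Monte Carlo ELBO for a toy model (all names hypothetical):
    prior p(z) = N(0, 1), likelihood p(x|z) = N(z, 1),
    variational distribution q(z) = N(mu, sigma^2).

    The reparameterization z = mu + sigma * eps with eps ~ N(0, 1)
    turns the intractable expectation E_q[log p(x, z) - log q(z)]
    into a differentiable average over noise samples.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        eps = rng.gauss(0.0, 1.0)
        z = mu + sigma * eps                     # reparameterized sample from q
        total += (log_normal(x, z, 1.0)          # log p(x | z)
                  + log_normal(z, 0.0, 1.0)      # log p(z)
                  - log_normal(z, mu, sigma))    # - log q(z)
    return total / n_samples
```

Because the model is conjugate, the exact posterior given x is N(x/2, 1/2); plugging in mu = x/2, sigma = sqrt(1/2) makes the ELBO coincide with the true log marginal likelihood log N(x; 0, sqrt(2)), while any other choice of (mu, sigma) yields a strictly lower value — the gap is exactly KL(q || posterior).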