According to the eigenstate thermalization hypothesis (ETH), the eigenstate-to-eigenstate fluctuations of expectation values of local observables should decrease with increasing system size. In approaching the thermodynamic limit (the number of sites and the particle number increasing at the same rate), the fluctuations should scale as ∼D^{-1/2} with the Hilbert space dimension D. Here, we study a different limit, the classical or semiclassical limit, by increasing the particle number in fixed lattice topologies. We focus on the paradigmatic Bose-Hubbard system, which is quantum-chaotic for large lattices and shows mixed behavior for small lattices. We derive expressions for the expected scaling, assuming ideal eigenstates with Gaussian-distributed random components. We show numerically that, for larger lattices, the ETH scaling of physical midspectrum eigenstates follows the ideal (Gaussian) expectation, but for smaller lattices the scaling occurs with a different exponent. We examine several plausible mechanisms for this anomalous scaling.
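For context, a minimal sketch of where the D^{-1/2} scaling comes from, assuming the standard (Srednicki) ETH ansatz rather than this paper's specific derivation:

```latex
% Srednicki's ETH ansatz for a local observable O in the energy eigenbasis:
\[
  O_{mn} \;=\; O(\bar{E})\,\delta_{mn}
         \;+\; e^{-S(\bar{E})/2}\, f_O(\bar{E},\omega)\, R_{mn},
\]
% where \bar{E} = (E_m + E_n)/2, \omega = E_m - E_n, S is the thermodynamic
% entropy, and R_{mn} are random variables with zero mean and unit variance
% (Gaussian in the ideal case). Since e^{S(\bar{E})} is of order the Hilbert
% space dimension D at midspectrum energies, the diagonal
% (eigenstate-to-eigenstate) fluctuations scale as
\[
  \delta O_{nn} \;\sim\; e^{-S(\bar{E})/2} \;\sim\; D^{-1/2}.
\]
```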
We consider gradient descent with 'momentum', a widely used method for loss-function minimization in machine learning. This method is often used with 'Nesterov acceleration', meaning that the gradient is evaluated not at the current position in parameter space but at the estimated position one step ahead. In this work, we show that the algorithm can be improved by extending this 'acceleration': using the gradient at an estimated position several steps ahead rather than just one step ahead. How far one looks ahead in this 'super-acceleration' algorithm is determined by a new hyperparameter. For a one-parameter quadratic loss function, the optimal value of the super-acceleration can be calculated exactly and estimated analytically. We show explicitly that super-accelerating the momentum algorithm is beneficial not only for this idealized problem but also for several synthetic loss landscapes and for the MNIST classification task with neural networks. Super-acceleration is also easy to incorporate into adaptive algorithms such as RMSProp and Adam, and is shown to improve them.
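To make the update concrete, here is a minimal sketch of a super-accelerated momentum step, assuming the lookahead hyperparameter (called sigma here) simply scales the usual Nesterov lookahead; the function name and the exact update form are illustrative assumptions, not the paper's reference implementation:

```python
import numpy as np

def super_accelerated_momentum(grad, theta0, lr=0.01, mu=0.9, sigma=3.0, n_steps=100):
    """Momentum gradient descent with a multi-step 'lookahead' gradient.

    sigma = 0 recovers plain momentum, sigma = 1 the usual Nesterov
    lookahead; sigma > 1 is the 'super-acceleration' described in the
    abstract. `sigma` and this exact update form are assumptions made
    for illustration.
    """
    theta = np.asarray(theta0, dtype=float)
    v = np.zeros_like(theta)
    for _ in range(n_steps):
        # Evaluate the gradient at the position estimated sigma steps ahead.
        lookahead = theta + sigma * mu * v
        v = mu * v - lr * grad(lookahead)
        theta = theta + v
    return theta

# Example: one-parameter quadratic loss L(x) = 0.5 * a * x**2, as in the abstract.
a = 4.0
grad = lambda x: a * x
print(super_accelerated_momentum(grad, theta0=np.array([1.0])))
```

The same lookahead gradient can be dropped into adaptive updates (e.g., the gradient fed to an RMSProp- or Adam-style accumulator), which is how the abstract describes incorporating super-acceleration into those algorithms.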