We investigate dissipative extensions of the Su-Schrieffer-Heeger model, comparing different approaches to modeling dissipation. We use two distinct frameworks to describe the gain and loss of particles: one employs Lindblad operators within the framework of the Lindblad master equation, the other uses complex potentials as an effective description of dissipation. The reservoirs are chosen such that the non-Hermitian complex potentials are PT-symmetric. From the effective theory we extract a state whose lattice-site occupation closely resembles that of the non-equilibrium steady state obtained from the Lindblad master equation. We find considerable similarities between the spectra of the effective Hamiltonian and the corresponding Liouvillian. Furthermore, we generalize the concept of the Zak phase to the dissipative scenario in terms of the Lindblad description and relate it to the topological phases of the underlying Hermitian Hamiltonian.
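As a rough illustration of the effective non-Hermitian description (not the paper's code), the sketch below builds an open SSH chain with alternating hoppings t1, t2 and a PT-symmetric complex potential, here placed as gain +iγ on the first site and balanced loss -iγ on the last site; the reservoir placement and all parameter values are assumptions for illustration only.

```python
# Minimal sketch, assuming edge gain/loss as the PT-symmetric reservoir choice.
import numpy as np

def ssh_pt_hamiltonian(n_cells, t1, t2, gamma):
    n = 2 * n_cells                      # two sites per unit cell
    H = np.zeros((n, n), dtype=complex)
    for i in range(n - 1):               # alternating hoppings t1, t2
        H[i, i + 1] = H[i + 1, i] = t1 if i % 2 == 0 else t2
    H[0, 0] = 1j * gamma                 # gain at one edge ...
    H[-1, -1] = -1j * gamma              # ... balanced loss at the other
    return H

H = ssh_pt_hamiltonian(n_cells=20, t1=1.0, t2=0.5, gamma=0.1)
eigvals = np.linalg.eigvals(H)
# A vanishing maximum imaginary part signals the PT-unbroken regime.
print("max |Im E| =", np.abs(eigvals.imag).max())
```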
We numerically investigate topological phases of periodic lattice systems in a tight-binding description under the influence of dissipation. The effects of dissipation are described effectively by PT-symmetric potentials. Within this framework we develop a general numerical gauge-smoothing procedure to calculate complex Berry phases from the biorthogonal basis of the system's non-Hermitian Hamiltonian. We then apply this method to a one-dimensional PT-symmetric lattice system and verify the numerical results against an analytical calculation.
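To make the quantity concrete, here is a sketch of a complex Zak phase computed from biorthogonal left/right Bloch eigenvectors. It uses a discretized Wilson-loop product, which is insensitive to the arbitrary gauge returned by the eigensolver, rather than the paper's gauge-smoothing procedure; the Bloch Hamiltonian and its parameters are illustrative assumptions.

```python
# Minimal sketch, assuming a two-band PT-symmetric SSH Bloch Hamiltonian H(k).
import numpy as np

def bloch_h(k, t1=0.5, t2=1.0, gamma=0.1):
    h = t1 + t2 * np.exp(-1j * k)
    return np.array([[1j * gamma, h], [np.conj(h), -1j * gamma]])

def complex_zak_phase(band=0, nk=400):
    ks = np.linspace(0.0, 2 * np.pi, nk, endpoint=False)
    lefts, rights = [], []
    for k in ks:
        H = bloch_h(k)
        wr, vr = np.linalg.eig(H)            # right eigenvectors of H
        wl, vl = np.linalg.eig(H.conj().T)   # left eigenvectors (of H^dagger)
        rights.append(vr[:, np.argsort(wr.real)[band]])
        lefts.append(vl[:, np.argsort(wl.real)[band]])
    # Closed Wilson loop of biorthogonal overlaps; dividing by <l_j|r_j>
    # removes the eigensolver's arbitrary phases and normalizations.
    prod = 1.0 + 0.0j
    for j in range(nk):
        jn = (j + 1) % nk
        prod *= np.vdot(lefts[j], rights[jn]) / np.vdot(lefts[j], rights[j])
    return 1j * np.log(prod)                 # complex Zak phase (real part mod 2*pi)

print(complex_zak_phase())
```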
We propose a modular extension of backpropagation for the computation of block-diagonal approximations to various curvature matrices of the training objective (in particular, the Hessian, the generalized Gauss-Newton matrix, and the positive-curvature Hessian). The approach reduces the otherwise tedious manual derivation of these matrices to local modules and is easy to integrate into existing machine-learning libraries. Moreover, we develop a compact notation derived from matrix differential calculus. We outline different strategies applicable to our method; they subsume recently proposed block-diagonal approximations as special cases, and we extend the concepts presented therein to convolutional neural networks.
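To illustrate the idea of local curvature modules (a simplified sketch, not the paper's library), the snippet below backpropagates the GGN of a quadratic loss through a two-layer linear network: each "module" turns the curvature of its output into a parameter block and the curvature of its input, mirroring how backprop handles gradients. Single sample, no nonlinearity, all names are hypothetical.

```python
# Minimal sketch of block-diagonal GGN backpropagation for z = W2 @ (W1 @ x).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))

a = W1 @ x                                    # forward pass
z = W2 @ a

G_z = np.eye(2)                               # curvature of 0.5*||z - y||^2 w.r.t. z

def linear_hbp(W, inp, G_out):
    """One module: backpropagate the output curvature through out = W @ inp."""
    G_param = np.kron(G_out, np.outer(inp, inp))  # block for vec(W), row-major
    G_in = W.T @ G_out @ W                        # curvature passed to the input
    return G_param, G_in

G_W2, G_a = linear_hbp(W2, a, G_z)
G_W1, _   = linear_hbp(W1, x, G_a)
print(G_W2.shape, G_W1.shape)                 # (8, 8) and (12, 12) diagonal blocks
```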
Automatic differentiation frameworks are optimized for exactly one thing: computing the average mini-batch gradient. Yet other quantities, such as the variance of the mini-batch gradients or many approximations to the Hessian, can in theory be computed efficiently, and at the same time as the gradient. While these quantities are of great interest to researchers and practitioners, current deep-learning software does not support their automatic calculation. Manually implementing them is burdensome, inefficient if done naïvely, and the resulting code is rarely shared. This hampers progress in deep learning and unnecessarily narrows research to focus on gradient descent and its variants; it also complicates replication studies and comparisons between newly developed methods that require those quantities, to the point of impossibility. To address this problem, we introduce BackPACK (https://f-dangel.github.io/backpack/), an efficient framework built on top of PyTorch that extends the backpropagation algorithm to extract additional information from first- and second-order derivatives. Its capabilities are illustrated by benchmark reports for computing additional quantities on deep neural networks, and by an example application testing several recent curvature approximations for optimization.
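A minimal usage sketch of the kind of workflow described above, based on BackPACK's published interface (check the linked documentation for the current API; the model, data, and chosen extensions here are arbitrary examples): extending the model and loss lets a single backward pass additionally populate per-parameter fields such as the gradient variance and a diagonal GGN approximation.

```python
# Minimal sketch, assuming the extend/backpack API and the Variance and
# DiagGGNExact extensions as described in the BackPACK documentation.
import torch
from backpack import backpack, extend
from backpack.extensions import Variance, DiagGGNExact

model = extend(torch.nn.Sequential(
    torch.nn.Linear(10, 5), torch.nn.ReLU(), torch.nn.Linear(5, 2)))
lossfunc = extend(torch.nn.CrossEntropyLoss())

X, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
loss = lossfunc(model(X), y)

with backpack(Variance(), DiagGGNExact()):
    loss.backward()                       # one pass fills the extra fields

for p in model.parameters():
    print(p.grad.shape, p.variance.shape, p.diag_ggn_exact.shape)
```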
Curvature, in the form of the Hessian or its generalized Gauss-Newton (GGN) approximation, is valuable for algorithms that rely on a local model of the loss to train, compress, or explain deep networks. Existing methods based on implicit multiplication via automatic differentiation or on Kronecker-factored block-diagonal approximations do not account for noise in the mini-batch. We present ViViT, a curvature model that leverages the GGN's low-rank structure without further approximations. It allows for efficient computation of eigenvalues, eigenvectors, as well as per-sample first- and second-order directional derivatives. The representation is computed in parallel with gradients in a single backward pass and offers a fine-grained cost-accuracy trade-off, which allows it to scale. As examples of ViViT's usefulness, we investigate the directional gradients and curvatures during training, and show how noise information can be used to improve the stability of second-order methods.
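The low-rank structure being exploited can be sketched in a few lines (an illustration of the underlying identity, not the ViViT package): for a sum of squared errors, the GGN factorizes as G = V Vᵀ, where the columns of V are per-sample backpropagated output directions, so the nonzero eigenvalues of the large G coincide with those of the small Gram matrix Vᵀ V. The random Jacobians and the dimensions below are placeholders.

```python
# Minimal sketch, assuming a squared-error loss so the GGN is sum_n J_n^T J_n.
import numpy as np

rng = np.random.default_rng(0)
N, D, C = 4, 40, 2                     # samples, parameters, model outputs
J = rng.normal(size=(N, C, D))         # per-sample output Jacobians d f_n / d w

V = J.reshape(N * C, D).T              # D x (N*C): columns J_n^T e_c
G = V @ V.T                            # full GGN, D x D
gram = V.T @ V                         # Gram matrix, (N*C) x (N*C)

ev_G = np.sort(np.linalg.eigvalsh(G))[::-1]
ev_gram = np.sort(np.linalg.eigvalsh(gram))[::-1]
print(np.allclose(ev_G[:N * C], ev_gram))   # leading spectra coincide
```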