2022
DOI: 10.1088/2632-2153/ac48a2

Differentiable programming of isometric tensor networks

Abstract: Differentiable programming is a new programming paradigm that enables large-scale optimization through the automatic calculation of gradients, also known as auto-differentiation. The concept emerged from deep learning and has since been generalized to tensor network optimization. Here, we extend differentiable programming to tensor networks with isometric constraints, with applications to the multiscale entanglement renormalization ansatz (MERA) and tensor network renormalization (TNR). By introducing several gr…
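
As a rough sketch of the idea (assumed for illustration, not the paper's own code), a single Riemannian gradient step for a real isometric tensor, reshaped to a matrix W with WᵀW = I, could be written in JAX as follows. The cost function `loss`, the matrix `H`, the tangent-space projection and the QR retraction are placeholder choices standing in for the paper's MERA/TNR setup.

```python
# Minimal sketch (not the paper's implementation): one Riemannian gradient
# step for a real isometric tensor W (W.T @ W = I), using JAX auto-differentiation.
import jax
import jax.numpy as jnp

def loss(W, H):
    # Placeholder cost standing in for, e.g., a MERA energy or a TNR truncation error.
    return jnp.trace(W.T @ H @ W)

def project_to_tangent(W, G):
    # Project the Euclidean gradient G onto the tangent space of the
    # Stiefel manifold {W : W.T @ W = I} (one common choice of projection).
    sym = 0.5 * (W.T @ G + G.T @ W)
    return G - W @ sym

def retract(M):
    # QR retraction: map the updated matrix back onto the isometric manifold.
    Q, R = jnp.linalg.qr(M)
    return Q * jnp.sign(jnp.diag(R))  # fix the sign gauge of each column

def riemannian_step(W, H, lr=0.05):
    G = jax.grad(loss)(W, H)        # Euclidean gradient via auto-differentiation
    xi = project_to_tangent(W, G)   # Riemannian gradient
    return retract(W - lr * xi)     # updated W still satisfies W.T @ W = I
```

Iterating such a step is the generic pattern for gradient-based optimization under an isometric constraint; the paper's own algorithms differ in the cost functions, metrics and update rules used.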

Cited by 10 publications (9 citation statements)
References 58 publications (80 reference statements)

“…For example, AD can be used for state-of-the-art infinite PEPS calculations and for calculating critical properties of classical systems [74]. In addition, it has proven useful for optimizing tensor networks with unitary or isometric constraints like quantum circuits, MERA, and gauged MPS [75][76][77][78], as well as for computing excitations and structure factors of MPS and PEPS [79,80]. The unique index system and generic high-level interface make ITensor ideal for defining differentiation through a variety of ITensor operations.…”
Section: Future Directions (mentioning)
confidence: 99%
“…First, DMRG is hard to implement in standard ML frameworks, especially when combining TNs and neuronal layers [49]. The algorithm has to be handcrafted for each problem [23]. Second, generalization to higher dimensions is possible [22,50] but not as efficient as for MPS due to entropy scaling [9].…”
Section: (C) Optimization Methods (mentioning)
confidence: 99%
“…Tools from differential geometry can be used for analysing the TN on the space of entanglement patterns [55] and optimizing on loss manifolds [56]. This kind of optimization performs well on high-dimensional parameter spaces, especially in combination with stochastic gradient descent [57] and auto-differentiation on individual nodes [58,59] or whole layers [23].…”
Section: (C) Optimization Methods (mentioning)
confidence: 99%
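
The "auto-differentiation on individual nodes" mentioned above can be pictured with a toy sketch (an illustrative assumption, not code from the cited references): a small network is contracted with einsum, and reverse-mode AD returns the gradient with respect to every node at once, which a stochastic optimizer could then consume.

```python
# Toy sketch: gradients of a tensor-network contraction w.r.t. individual nodes.
import jax
import jax.numpy as jnp

def network_value(tensors):
    A, B, C = tensors
    # Contract a ring of three rank-2 tensors into a scalar.
    return jnp.einsum('ij,jk,ki->', A, B, C)

key = jax.random.PRNGKey(0)
kA, kB, kC = jax.random.split(key, 3)
tensors = [jax.random.normal(kA, (4, 4)),
           jax.random.normal(kB, (4, 4)),
           jax.random.normal(kC, (4, 4))]

# Reverse-mode AD gives one gradient tensor per node.
grads = jax.grad(network_value)(tensors)
```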
“…[26] was nine qubits. Facing this challenge, in this work, we further develop efficient numerical methods based on matrix product states (MPS) [33][34][35][36] for calculating entanglement features and solving reconstruction coefficients. The key idea is to pack the entanglement feature (purities in different entanglement regions) of the classical snapshot state U†|b⟩ into a fictitious quantum many-body state and represent this entanglement feature state by an MPS.…”
Section: Introduction (mentioning)
confidence: 99%
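
A minimal sketch of the MPS mechanics behind this idea, under the assumption of open boundaries with trivial boundary bonds (not the cited paper's implementation): each amplitude of the fictitious entanglement-feature state is read off by a left-to-right contraction of the matrices selected by the bit string labelling the entanglement region.

```python
# Illustrative sketch: read one amplitude out of an MPS for a given bit string.
import numpy as np

def mps_amplitude(mps_tensors, bits):
    # Each tensor has shape (chi_left, 2, chi_right); boundary bonds have dimension 1.
    vec = np.ones(1)
    for W, b in zip(mps_tensors, bits):
        vec = vec @ W[:, b, :]   # select the physical index and contract the bond
    return vec.item()            # final bond dimension is 1, so this is a scalar
```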