2022
DOI: 10.48550/arxiv.2205.04025
Preprint

Sketching the Best Approximate Quantum Compiling Problem

Abstract: This paper considers the problem of quantum compilation from an optimization perspective: fixing a circuit structure of CNOTs and rotation gates, then optimizing over the rotation angles. We solve the optimization problem classically and consider algorithmic tools to scale it to higher numbers of qubits. We investigate stochastic gradient descent and two sketch-and-solve algorithms. For all three algorithms, we compute the gradient efficiently using matrix-vector instead of matrix-matrix computations. Allowing…
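The abstract's "matrix-vector instead of matrix-matrix" point can be illustrated with a minimal sketch (the function names, gate layout format, and Ry-only rotation choice here are hypothetical illustrations, not the paper's code): the cost of a fixed CNOT/rotation template is evaluated by applying each gate directly to a 2^n state vector, never forming the full 2^n x 2^n circuit unitary.

```python
import numpy as np

def ry(theta):
    """2x2 Y-rotation matrix (real-valued)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, qubit, n):
    """Apply a 2x2 gate to one qubit of an n-qubit state vector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, qubit, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))  # matrix-vector, per qubit
    psi = np.moveaxis(psi, 0, qubit)
    return psi.reshape(-1)

def apply_cnot(state, control, target, n):
    """Apply CNOT(control, target) by flipping the target axis on the control=1 slice."""
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[control] = 1
    sub = psi[tuple(idx)].copy()
    # After slicing away the control axis, the target axis index shifts by one.
    axis = target if target < control else target - 1
    psi[tuple(idx)] = np.flip(sub, axis=axis)
    return psi.reshape(-1)

def cost(angles, target_state, layout, n):
    """1 - |<target|V(angles)|0>|^2 for a fixed CNOT/rotation layout.

    `layout` is a list of ("ry", qubit) or ("cx", control, target) tuples;
    rotation angles are consumed in order.
    """
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0
    k = 0
    for op in layout:
        if op[0] == "ry":
            state = apply_1q(state, ry(angles[k]), op[1], n)
            k += 1
        else:
            state = apply_cnot(state, op[1], op[2], n)
    return 1.0 - abs(np.vdot(target_state, state)) ** 2
```

Each gate application touches only a 2^n vector, so the work per gate is O(2^n) rather than the O(2^n x 2^n) of accumulating the circuit unitary by matrix-matrix products.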

Cited by 2 publications (6 citation statements). References 21 publications.
“…L⁽¹⁾(α) ≈ 0, all the flip terms are switched off and we arrive at the desired solution V(θ)|0⟩ ≈ |ψ₀⟩. In A we discuss this weighting scheme in more detail, as well as our implementation of a fast-gradient approach based on [37] which allows us to do experiments with up to 24 qubits. We also developed a number of techniques to improve performance; in particular, we use a "surrogate model" which speeds up the calculation of the gradient. We discuss these aspects of the algorithm in B.…”
Section: Algorithm
confidence: 99%
“…can enhance the optimisation procedure due to the reduced noise in the gradient profile. To increase speed, we apply a fast-gradient approach based on [37] by using a more economical matrix-vector multiplication. However, this approach must be modified in order to apply it to our case; the main bottleneck is that the gradient of every term in the sum in equation (31) must be computed separately and then all of them summed up.…”
Section: A Weighting Scheme
confidence: 99%
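The per-term gradient bottleneck described in the quote above can be sketched as follows (a hedged illustration with hypothetical names, using central finite differences in place of the cited analytic fast gradient): for a weighted sum L(α) = Σᵢ wᵢ Lᵢ(α), each term's gradient is formed separately and the weighted results are summed.

```python
import numpy as np

def grad_of_sum(terms, weights, theta, eps=1e-6):
    """Gradient of sum_i weights[i] * terms[i](theta), one term at a time.

    Each term's gradient is computed independently (here by central
    differences) and accumulated -- the per-term structure the quoted
    passage identifies as the main bottleneck.
    """
    total = np.zeros_like(theta)
    for w, f in zip(weights, terms):
        g = np.zeros_like(theta)
        for k in range(len(theta)):
            e = np.zeros_like(theta)
            e[k] = eps
            g[k] = (f(theta + e) - f(theta - e)) / (2 * eps)
        total += w * g
    return total
```

The cost scales with the number of terms times the number of parameters, which is why replacing the per-term loop with a shared matrix-vector pass is attractive.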