2023
DOI: 10.1103/physrevlett.131.210602
Universal Sampling Lower Bounds for Quantum Error Mitigation

Ryuji Takagi, Hiroyasu Tajima, Mile Gu

Cited by 15 publications (4 citation statements). References 68 publications.
“…Note that by using fidelity instead of relative entropy a bound tighter than Eq. (51) can also be obtained, as discussed by Takagi, Tajima, and Gu (2023). Quek et al. (2022) managed to construct a circuit structure for which the relative entropy of two different output states decreases exponentially in both circuit depth and the number of qubits, which allowed them to prove that the worst-case sampling lower bound for QEM scaled exponentially with the circuit size (instead of just the depth), confirming our intuition obtained in Sec.…”
Section: B. Benchmarking QEM From Other Perspectives (supporting)
confidence: 72%
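As a rough illustration of the depth argument in the statement above, the sketch below assumes a toy model in which each of L circuit layers is followed by global depolarizing noise of strength p; this is an assumed model for illustration, not the construction of Quek et al. (2022). Under that assumption the trace distance between two different ideal outputs shrinks as (1-p)^L, so the number of samples needed to tell them apart grows roughly as (1-p)^(-2L).

```python
# Toy model (assumption): L layers, each followed by global depolarizing noise
# of strength p, so the output is (1-p)^L * rho_ideal + (1-(1-p)^L) * I/d.
import numpy as np

def noisy_output(rho_ideal, p, L):
    """Ideal output mixed toward the maximally mixed state by L noise layers."""
    d = rho_ideal.shape[0]
    survive = (1.0 - p) ** L
    return survive * rho_ideal + (1.0 - survive) * np.eye(d) / d

def trace_distance(a, b):
    """Trace distance 0.5 * ||a - b||_1 for Hermitian matrices."""
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(a - b)))

rho0 = np.diag([1.0, 0.0])   # ideal output |0><0|
rho1 = np.diag([0.0, 1.0])   # ideal output |1><1|
p = 0.05
for L in (10, 50, 100):
    td = trace_distance(noisy_output(rho0, p, L), noisy_output(rho1, p, L))
    print(f"L={L:3d}  trace distance = {td:.3e}  samples to distinguish ~ {td**-2:.3e}")
```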
“…Second, it is practically important to investigate the performance of subspace methods in the context of quantum error mitigation [38]. Quantum error mitigation techniques inevitably require an exponential growth in the number of measurements [73][74][75], while the exponent in the scaling depends heavily on the details of the error mitigation method. Thus, it is crucial to examine how the adaptive strategy benefits the trade-off relation between bias and variance.…”
Section: Discussion (mentioning)
confidence: 99%
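The trade-off mentioned in this statement can be sketched with a toy probabilistic-error-cancellation-style estimator: the bias of the raw noisy expectation value is removed at the cost of a variance factor gamma^2, with gamma growing exponentially in the number of noisy gates. All numbers below (per-gate overhead, bias model) are illustrative assumptions, not values from any of the cited works.

```python
# Hedged sketch of the bias-variance trade-off: mitigation removes bias but
# inflates the standard deviation by gamma, which grows exponentially in the
# number of noisy gates, so ~gamma^2 more shots are needed for fixed precision.
import numpy as np

rng = np.random.default_rng(0)

true_value = 1.0        # ideal expectation value (assumed)
noisy_value = 0.7       # biased value from the noisy circuit (assumed)
gamma_per_gate = 1.02   # per-gate quasi-probability norm (assumed)
n_gates = 200
shots = 10_000

gamma = gamma_per_gate ** n_gates   # total overhead, exponential in n_gates

# Unmitigated estimator: small statistical error, but systematically biased.
unmitigated = noisy_value + rng.normal(0.0, 1.0 / np.sqrt(shots))

# Mitigated (PEC-style) estimator: unbiased on average, standard deviation
# inflated by gamma.
mitigated = true_value + rng.normal(0.0, gamma / np.sqrt(shots))

print(f"gamma = {gamma:.1f}, shot-count inflation ~ gamma^2 = {gamma**2:.1f}")
print(f"unmitigated: {unmitigated:.3f} (bias {noisy_value - true_value:+.2f})")
print(f"mitigated:   {mitigated:.3f} (std ~ {gamma / np.sqrt(shots):.3f})")
```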
“…Instead, the primary overhead arises from the fact that error-mitigated estimation necessarily comes at the cost of larger overall variances [95][96][97][98]. The quantity…”
Section: Theory Of Symmetry-Adjusted Classical Shadows (mentioning)
confidence: 99%
“…Any QEM strategy necessarily incurs a sampling overhead dependent on the amount of noise [95][96][97][98]. For global Clifford shadows, Chen et al. [85] show that the sample complexity is augmented by a factor of $O(F_Z(\mathcal{E})^{-2})$ for estimating observables with constant Hilbert-Schmidt norm, where $F_Z(\mathcal{E}) = 2^{-n}\sum_{b\in\{0,1\}^n}\langle\!\langle b|\mathcal{E}|b\rangle\!\rangle$ is the average Z-basis fidelity of $\mathcal{E}$. Meanwhile, for local Clifford shadows, they prove that product noise of the form…”
(mentioning)
confidence: 99%
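To make the quoted quantity concrete, the sketch below evaluates the average Z-basis fidelity F_Z(E) for global depolarizing noise E(rho) = (1-p) rho + p I/2^n, an assumed noise model used here only for illustration, and reports the resulting O(F_Z(E)^{-2}) sample-complexity factor.

```python
# F_Z(E) = 2^{-n} * sum_b <b| E(|b><b|) |b>, evaluated for global depolarizing
# noise (assumed model): each computational-basis state keeps weight (1-p) + p/2^n.
def z_basis_fidelity_depolarizing(p: float, n: int) -> float:
    """Average Z-basis fidelity of a global depolarizing channel on n qubits."""
    return (1.0 - p) + p / 2.0 ** n

for n in (2, 10, 50):
    f_z = z_basis_fidelity_depolarizing(p=0.1, n=n)
    print(f"n={n:2d}  F_Z = {f_z:.6f}  sample-complexity factor ~ F_Z^-2 = {f_z**-2:.4f}")
```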