2018
DOI: 10.1038/s41534-017-0052-0
Experimental quantum verification in the presence of temporally correlated noise

Abstract: Growth in the capabilities of quantum information hardware mandates access to techniques for performance verification that function under realistic laboratory conditions. Here we experimentally characterise the impact of common temporally correlated noise processes on both randomised benchmarking (RB) and gate-set tomography (GST). Our analysis highlights the role of sequence structure in enhancing or suppressing the sensitivity of quantum verification protocols to either slowly or rapidly varying noise, which…

Cited by 59 publications (53 citation statements)
References 28 publications
“…Furthermore, the optimization can change the relative size of errors from different components of the device, leading to misunderstandings about their relative quality. This misidentification of error was recently observed experimentally in a trapped ion processor: in particular, figure 4(h) in [19] demonstrates that the gauge optimization procedure may effectively cancel some gate errors that were purposely added, resulting in a smaller 'diamond norm distance' than expected, which implies unrealistically good quantum gates. In that paper, and in other systems where, for example, a basis change in the classical software is used to achieve certain 'virtual' gates [20,21], experimentalists have a priori information about which operations are better.…”
Section: Operational Interpretations Of Figures Of Merit (supporting)
confidence: 57%
“…RB was first used to demonstrate average error probabilities of less than 0.5% in π/2 pulses in a ⁹Be⁺ qubit [185], and has since become a standard gate-characterization tool for QC research. While RB is a good way of extracting the stochastic errors that determine gate fidelity, it performs less well in the case of correlated errors, which may be either cancelled out or amplified depending on the exact random gate sequence applied, so RB provides little information about the magnitude of such correlated errors [191,192]. Although it is a good method of decoupling SPAM errors from gate errors, randomized benchmarking does not fully characterize the gate errors that are present.…”
Section: Gate Characterization: Tomography, Benchmarking, And Calibration (mentioning)
confidence: 99%
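The cancel-or-amplify behaviour described in the excerpt above can be illustrated with a toy simulation. This is a hedged sketch, not the cited papers' actual protocol: it assumes a simplified RB-like experiment of random π/2 pulses about X or Y with Gaussian over-rotation errors, comparing rapidly varying (fresh error per pulse) against quasi-static (one frozen error per sequence) noise; the function names and parameters are illustrative only.

```python
import numpy as np

# Pauli matrices; a rotation by theta about axis a is exp(-i theta a.sigma / 2).
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rot(axis, theta):
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * axis

def survival(m, correlated, rng, sigma=0.05):
    """Survival probability of |0> after m random pi/2 pulses plus a
    perfect recovery operation, with over-rotation noise on each pulse."""
    axes = [X if rng.random() < 0.5 else Y for _ in range(m)]
    if correlated:
        # Quasi-static: one error value frozen for the whole sequence.
        eps = np.full(m, rng.normal(0.0, sigma))
    else:
        # Rapidly varying: an independent error draw on every pulse.
        eps = rng.normal(0.0, sigma, size=m)
    ideal, noisy = I2, I2
    for a, e in zip(axes, eps):
        ideal = rot(a, np.pi / 2) @ ideal
        noisy = rot(a, np.pi / 2 + e) @ noisy
    final = ideal.conj().T @ noisy   # ideal inverse acts as the recovery pulse
    return abs(final[0, 0]) ** 2

rng = np.random.default_rng(1)
stats = {}
for label, corr in (("rapidly varying", False), ("quasi-static", True)):
    p = np.array([survival(100, corr, rng) for _ in range(300)])
    stats[label] = (p.mean(), p.std())
    print(f"{label:16s} mean={p.mean():.3f}  spread (std)={p.std():.3f}")
```

Under quasi-static noise the survival probabilities spread out far more across random sequences: some sequences echo the frozen error away while others accumulate it coherently, which is the cancel-or-amplify effect the excerpt attributes to correlated errors.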
“…The key underlying concept is that in a randomized benchmarking sequence built up from many operations, the resultant net state transformation in the presence of noise, Ũ_eff|ψ⟩ (Fig. 1b), is determined by an interplay of both the sensitivity of each individual operation to the noise [35] and the impact of the sequence structure on error accumulation [32,46,48]. Specifically, nominally equivalent randomized benchmarking sequences (constructed to perform the same net operation) exhibit variations in correlated-noise susceptibility that are analytically calculable and verifiable in experiments.…”
Section: Identifying Signatures Of Error Correlations In Circuits (mentioning)
confidence: 99%
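The point about nominally equivalent sequences can also be sketched directly. In this illustrative toy model (again an assumption-laden simplification, not the paper's construction), every sequence compiles to the identity, and the *same* constant over-rotation error is applied to every pulse; any spread in the outcomes is therefore due purely to sequence structure, i.e. the sequence-dependent correlated-noise susceptibility the excerpt describes.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rot(axis, theta):
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * axis

def sequence_survival(m, eps, rng):
    """One nominally-identity sequence: m random pi/2 pulses followed by the
    exact inverse of their ideal product, with each pulse over-rotated by eps."""
    axes = [X if rng.random() < 0.5 else Y for _ in range(m)]
    ideal, noisy = I2, I2
    for a in axes:
        ideal = rot(a, np.pi / 2) @ ideal
        noisy = rot(a, np.pi / 2 + eps) @ noisy
    final = ideal.conj().T @ noisy
    return abs(final[0, 0]) ** 2

rng = np.random.default_rng(7)
eps = 0.05   # identical deterministic over-rotation for every sequence
p = np.array([sequence_survival(50, eps, rng) for _ in range(200)])
print(f"survival over 200 equivalent sequences: "
      f"min={p.min():.3f} max={p.max():.3f} std={p.std():.3f}")
```

Although every sequence implements the same net operation (the identity), the survival probabilities differ from sequence to sequence under the shared error, so the error-free-equivalent circuits are not error-equivalent.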