Understanding the noise affecting a quantum device is of fundamental importance for scaling quantum technologies. A particularly important class of noise models is that of Pauli channels: randomized compiling techniques can effectively bring any quantum channel to this form, and Pauli channels are significantly more structured than general quantum channels. In this paper, we show fundamental lower bounds on the sample complexity for learning Pauli channels in diamond norm with unentangled measurements. We consider both adaptive and non-adaptive strategies. In the non-adaptive setting, we show a lower bound of Ω(2^{3n} ε^{-2}) to learn an n-qubit Pauli channel. In particular, this shows that the recently introduced learning procedure of Flammia and Wallman (2020) is essentially optimal. In the adaptive setting, we show a lower bound of Ω(2^{2.5n} ε^{-2}) for ε = O(2^{-n}), and a lower bound of Ω(2^{2n} ε^{-2}) for any ε > 0. The last lower bound holds even for arbitrarily many sequential uses of the channel, as long as they are interspersed only with other unital operations.
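The Flammia-Wallman procedure referenced above estimates the eigenvalues of the Pauli channel and then inverts a Walsh-Hadamard-type transform over the symplectic form to recover the Pauli error rates. As a rough illustration of that idea (a classical simulation sketch, not the authors' exact protocol; the qubit count, shot budget, Dirichlet error-rate prior, and bit encoding of Pauli labels are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2                # number of qubits (kept tiny so the simulation is cheap)
N = 4 ** n           # number of n-qubit Pauli labels
shots = 100_000      # measurements per eigenvalue (illustrative choice)

def sym(a, b):
    """Symplectic product <a,b>: 0 if Paulis a, b commute, 1 otherwise.
    Labels are encoded as n pairs of (x, z) bits."""
    s = 0
    for _ in range(n):
        s ^= ((a & 1) & ((b >> 1) & 1)) ^ (((a >> 1) & 1) & (b & 1))
        a >>= 2
        b >>= 2
    return s

M = np.array([[(-1) ** sym(a, b) for a in range(N)] for b in range(N)])

# Ground-truth Pauli error rates p_a (assumption: a random sparse profile).
p = rng.dirichlet(0.1 * np.ones(N))
lam = M @ p          # channel eigenvalues: lambda_b = sum_a (-1)^<a,b> p_a

# Simulated experiment: prepare an eigenstate of P_b, apply the channel,
# measure P_b; the outcome flips with probability (1 - lambda_b) / 2.
lam_hat = np.array([1 - 2 * rng.binomial(shots, (1 - l) / 2) / shots
                    for l in lam])

# The transform is self-inverse up to a factor 1/N, so the error rates
# are recovered from the estimated eigenvalues directly.
p_hat = M @ lam_hat / N
print("max error in recovered rates:", np.abs(p_hat - p).max())
```

The simulation exploits that a Pauli channel is diagonal in the Pauli basis, so each eigenvalue experiment reduces to a Bernoulli flip; the lower bounds above concern how many such unentangled uses of the channel are unavoidable when the recovered rates must be accurate enough for a diamond-norm guarantee.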
How many copies of a quantum process are necessary and sufficient to construct an approximate classical description of it? We extend the result of Surawy-Stepney, Kahn, Kueng, and Guta (2022) to show that Õ(d_in^3 d_out^3 / ε^2) copies are sufficient to learn any quantum channel C^{d_in × d_in} → C^{d_out × d_out} to within ε in diamond norm. Moreover, we show that Ω(d_in^3 d_out^3 / ε^2) copies are necessary for any strategy using incoherent non-adaptive measurements. This lower bound applies even for ancilla-assisted strategies.
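Since both bounds are stated in diamond norm, it may help to see that distance computed numerically: it admits a semidefinite-program formulation due to Watrous, exposed in Qiskit's quantum_info module. A minimal sketch, assuming qiskit with a cvxpy solver is installed; the single-qubit depolarizing example and noise strength are illustrative choices:

```python
import numpy as np
from qiskit.quantum_info import Choi, diamond_norm  # diamond_norm needs cvxpy

d = 2  # single qubit: d_in = d_out = 2

# Choi matrix of the identity channel (unnormalized maximally entangled
# projector, input subsystem first, matching Qiskit's convention).
omega = np.zeros((d * d, 1))
for i in range(d):
    omega[i * d + i, 0] = 1.0
J_id = omega @ omega.T

# Choi matrix of the depolarizing channel rho -> (1 - p) rho + p I/d.
p = 0.01  # illustrative noise strength
J_dep = (1 - p) * J_id + p * np.eye(d * d) / d

# Diamond-norm distance between the two channels via the SDP; for qubit
# depolarizing the exact value is 3p/2 = 0.015.
print(diamond_norm(Choi(J_dep - J_id)))
```

The diamond norm is the operationally meaningful choice here because it bounds the distinguishability of the learned and true channels even when they are applied to one half of an entangled input.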
What advantage do sequential procedures provide over batch algorithms for testing properties of unknown distributions? Focusing on the problem of testing whether two distributions D_1 and D_2 on {1, ..., n} are equal or ε-far, we give several answers to this question. We show that for a small alphabet size n, there is a sequential algorithm that outperforms any batch algorithm by a factor of at least 4 in terms of sample complexity. For a general alphabet size n, we give a sequential algorithm that uses no more samples than its batch counterpart, and possibly fewer if the actual distance TV(D_1, D_2) between D_1 and D_2 is larger than ε. As a corollary, letting ε go to 0, we obtain a sequential algorithm for testing closeness when no a priori bound on TV(D_1, D_2) is given, with sample complexity Õ(n^{2/3} / TV(D_1, D_2)^{4/3}): this improves over the Õ(n / (log n · TV(D_1, D_2)^2)) tester of Daskalakis and Kawase (2017) and is optimal up to multiplicative constants. We also establish limitations of sequential algorithms for the problems of testing identity and closeness: they can improve the worst-case number of samples by at most a constant factor.

[…] d^2 · log log(1/d) samples. We design the stopping rules according to a time-uniform concentration inequality deduced from McDiarmid's inequality, using the ideas of Howard et al. (2018, 2020) to obtain powers of log log(1/d) instead of log(1/d).

We show that the sample complexity for the closeness testing problem given by Eq. (1) is optimal up to multiplicative constants in the worst-case setting (i.e., when looking for a bound independent of the distributions D_1 and D_2). To do so, we construct two families of distributions whose cross TV distance is exactly d ≥ ε and that are hard to distinguish unless the number of samples matches Eq. (1). This lower bound is based on properties of the KL divergence along with Wald's lemma. Using similar techniques, we also establish upper and lower bounds for testing identity that match up to multiplicative constants.

In addition, we establish a lower bound on the number of queries that matches Eq. (2) up to multiplicative constants. The proof is inspired by Karp and Kleinberg (2007), who proved lower bounds for testing whether the mean of a sequence of i.i.d. Bernoulli variables is smaller or larger than 1/2. We construct well-chosen distributions D_k (for integer k) at distance ε_k from uniform, with ε_k decreasing to 0, and then use properties of the Kullback-Leibler divergence to show that no algorithm can distinguish between D_k and the uniform distribution using fewer samples than in Eq. (2). Note that we could have used the closeness testing lower bound described in the previous paragraph and let ε go to 0; however, this gives sub-optimal lower bounds.

Discussion of the setting and related work. It is clearly impossible to test D_1 = D_2 versus D_1 ≠ D_2 in finite time: this is why the slack parameter ε is introduced in this setting. Other authors, like Daskalakis and Kawase (2017), make a different choice: they fix no ε, but only req…
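To make the sequential idea concrete, here is a toy tester in the spirit described above: it draws paired samples, checks the plug-in TV estimate at geometrically spaced checkpoints, and stops as soon as a time-uniform confidence interval separates the two hypotheses. The checkpoint schedule, the crude sqrt(n/t) bias term, and the union-bound threshold are simplifying assumptions of this sketch; the paper's stopping rules are sharper, achieving log log(1/d) rather than log factors:

```python
import numpy as np

rng = np.random.default_rng(1)

def sequential_closeness_test(draw1, draw2, n, eps, delta=0.05, max_k=24):
    """Toy sequential tester for D_1 = D_2 vs TV(D_1, D_2) >= eps.
    draw1/draw2 each return one sample from {0, ..., n-1}."""
    c1, c2 = np.zeros(n), np.zeros(n)
    t = 0
    for k in range(1, max_k + 1):          # checkpoints at t = 2, 4, 8, ...
        while t < 2 ** k:
            c1[draw1()] += 1
            c2[draw2()] += 1
            t += 1
        tv_hat = 0.5 * np.abs(c1 - c2).sum() / t   # plug-in TV estimate
        # McDiarmid deviation, union-bounded over checkpoints (crude),
        # plus a bound of order sqrt(n/t) on the estimator's upward bias.
        dev = np.sqrt(np.log(2 * k * k / delta) / t)
        bias = np.sqrt(n / t)
        if tv_hat - dev - bias > 0:        # TV provably positive -> far
            return "far", t
        if tv_hat + dev < eps:             # TV provably below eps -> equal
            return "equal", t
    return "equal", t                       # sample budget exhausted

n, eps = 20, 0.3
D1, D2 = rng.dirichlet(np.ones(n)), rng.dirichlet(np.ones(n))
verdict, used = sequential_closeness_test(
    lambda: rng.choice(n, p=D1), lambda: rng.choice(n, p=D2), n, eps)
print(verdict, "after", used, "samples")
```

The sequential advantage shows up in the "far" branch: when the true distance d = TV(D_1, D_2) is much larger than ε, the confidence interval excludes zero at an early checkpoint and the tester stops with far fewer samples than a batch algorithm calibrated to ε would use.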