A quantum learning machine for binary classification of qubit states that does not require quantum memory is introduced and shown to perform with the minimum error rate allowed by quantum mechanics for any size of the training set. This result is shown to be robust under (an arbitrary amount of) noise and under (statistical) variations in the composition of the training set, provided it is large enough. This machine can be used an arbitrary number of times without retraining. Its required classical memory grows only logarithmically with the number of training qubits, while its excess risk decreases as the inverse of this number, and twice as fast as the excess risk of an “estimate-and-discriminate” machine, which estimates the states of the training qubits and classifies the data qubit with a discrimination protocol tailored to the obtained estimates.
Sudden changes are ubiquitous in nature. Identifying them is crucial for a number of applications in biology, medicine, and the social sciences. Here we take the problem of detecting sudden changes to the quantum domain. We consider a source that emits quantum particles in a default state until a point where a mutation occurs that causes the source to switch to another state. The problem is then to find out where the change occurred. We determine the maximum probability of correctly identifying the change point, allowing for collective measurements on the whole sequence of particles emitted by the source. Then, we devise online strategies where the particles are measured individually and an answer is provided as soon as a new particle is received. We show that these online strategies substantially underperform the optimal quantum measurement, indicating that quantum sudden changes, although happening locally, are better detected globally.

The detection of sudden changes in a sequence of random variables is a pivotal topic in statistics, known as the change point problem [1][2][3]. The problem has widespread applications, including the study of stock market variations [4], protein folding [5], and landscape changes [6]. In general, identifying change points plays a crucial role in all problems involving the analysis of samples collected over time [2,7], because such analysis requires the stability of the system parameters [8]. If changes are correctly detected, the sample can be conveniently divided into subsamples, which can then be analyzed by standard statistical techniques. The detection of change points can also be viewed as a border problem [9], namely a problem where one wants to draw a separation between two (or more) different configurations, a task that plays a central role in machine learning [10]. The simplest example of a change point problem is that of a coin with variable bias.
Imagine that a game of Heads or Tails is played with a fair coin, but after a few rounds one player suspects that the other has replaced the coin with a biased one. After inspection of the coin, the suspicion is confirmed: the coin now has a bias. Can we identify when the coin was changed based only on the sequence of outcomes? This classical problem has a natural extension to the quantum realm, illustrated in Figure 1: A source is promised to prepare quantum particles in some default state. At some point, however, the source undergoes a mutation and starts to produce copies of a different state. Given the sequence of particles emitted by the source, the problem is to find out when the change took place. In the basic version of the problem, the initial and final states are known, as in the classical example of the coin. No prior information is given about the location of the change: a priori, every point of the sequence is equally likely to be the change point. For simplicity, we assume the quantum states to be pure.

FIG. 1: The quantum change point problem. A quantum source emits particles in a default state $|0\rangle$, until the p…
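As an illustration of the classical coin version (a minimal sketch, not from the paper; it assumes both biases are known and a change at index $k$ means all flips from position $k$ onward come from the biased coin), the most likely change point is the one maximizing the log-likelihood of the sequence:

```python
import math

def ml_change_point(outcomes, p0, p1):
    """Maximum-likelihood change point for a Bernoulli sequence.

    outcomes: list of 0/1 coin flips; p0, p1: known pre- and
    post-change heads probabilities.  Returns the 0-based index k
    at which the biased coin most likely took over (k == len(outcomes)
    means no change is the most likely explanation).
    """
    n = len(outcomes)
    best_k, best_ll = 0, float("-inf")
    for k in range(n + 1):              # change may occur before any flip
        ll = 0.0
        for i, x in enumerate(outcomes):
            p = p0 if i < k else p1     # fair coin before k, biased after
            ll += math.log(p if x == 1 else 1.0 - p)
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k

# e.g. a fair coin (p0 = 0.5) replaced by a heads-heavy one (p1 = 0.9):
# ml_change_point([0, 1, 0, 1, 1, 1, 1, 1], 0.5, 0.9) → 3
```

With uniform priors over change points, maximizing the likelihood is equivalent to maximizing the posterior, which matches the flat-prior assumption stated above.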
We view $p_J$ as the probability of obtaining the outcome $(J, M)$ in a measurement of the ($z$ component of the) total angular momentum on the unknown state. Likewise, we view $\pi^1_J$ and $\pi^2_J = 1 - \pi^1_J$ as the probabilities that the unknown state be $|j_{AB}; J\, M\rangle$ or $|j_{BC}; J\, M\rangle$ for that specific pair of outcomes $J$ and $M$ (note that these probabilities are actually independent of $M$). If the condition $c_J^2/(1+c_J^2) \le \pi^1_J \le 1/(1+c_J^2)$, where $c_J = |\langle j_{AB}; J\, M | j_{BC}; J\, M\rangle|$, holds, then the probability of obtaining the inconclusive answer when we finally discriminate between $|j_{AB}; J\, M\rangle$ and $|j_{BC}; J\, M\rangle$ is [1] $P^{\rm UA}_J = 2\sqrt{\pi^1_J \pi^2_J}\, c_J$. One can prove that the condition above holds for $J^1_{\min} \le J < J_{\max}$, whereas $P^{\rm UA}_{J_{\max}} = 1$, and $P^{\rm UA}_J = 0$ for $J^2_{\min} \le J < J^1_{\min}$. By adding up the contributions from the different values of $J$ one finally obtains Eq. (A1). Proceeding along similar lines and recalling that $P^{\rm ME}_J = \bigl(1 - \sqrt{1 - 4\pi^1_J \pi^2_J c_J^2}\bigr)/2$ for the minimal error [1], one can prove that Eq. (A2) in Appendix A should read
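The two closed-form expressions, the inconclusive-outcome probability for unambiguous discrimination and the minimum-error probability, can be evaluated numerically. A minimal sketch (function names are illustrative, not from the paper; priors are written as $\pi^1$ and overlaps as $c$, dropping the $J$ label):

```python
import math

def p_inconclusive(pi1, c):
    """Inconclusive-outcome probability 2*sqrt(pi1*pi2)*c for optimal
    unambiguous discrimination of two pure states with overlap c,
    given prior pi1 for the first state (pi2 = 1 - pi1).

    The closed form is valid only in the regime
    c**2/(1+c**2) <= pi1 <= 1/(1+c**2); outside it the optimal
    strategy changes and this sketch simply refuses to answer.
    """
    lo, hi = c**2 / (1 + c**2), 1.0 / (1 + c**2)
    if not lo <= pi1 <= hi:
        raise ValueError("prior outside the validity regime of the formula")
    return 2.0 * math.sqrt(pi1 * (1.0 - pi1)) * c

def p_min_error(pi1, c):
    """Helstrom minimum-error probability (1 - sqrt(1 - 4*pi1*pi2*c**2))/2
    for the same pair of states and priors."""
    return (1.0 - math.sqrt(1.0 - 4.0 * pi1 * (1.0 - pi1) * c**2)) / 2.0
```

For equal priors and overlap $c = 1/2$, for instance, the inconclusive probability equals $c = 0.5$, while the minimum-error probability is about $0.067$, illustrating the usual trade-off: unambiguous discrimination never errs but often fails to give an answer.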
In supervised learning, an inductive learning algorithm extracts general rules from observed training instances, then the rules are applied to test instances. We show that this splitting of training and application arises naturally, in the classical setting, from a simple independence requirement with a physical interpretation of being nonsignaling. Thus, two seemingly different definitions of inductive learning happen to coincide. This follows from the properties of classical information that break down in the quantum setup. We prove a quantum de Finetti theorem for quantum channels, which shows that in the quantum case, the equivalence holds in the asymptotic setting, that is, for large numbers of test instances. This reveals a natural analogy between classical learning protocols and their quantum counterparts, justifying a similar treatment, and allowing us to inquire about standard elements in computational learning theory, such as structural risk minimization and sample complexity.