Supervised machine learning is the classification of new data based on already classified training examples. In this work, we show that the support vector machine, an optimized binary classifier, can be implemented on a quantum computer, with complexity logarithmic in the size of the vectors and the number of training examples. In cases where classical sampling algorithms require polynomial time, an exponential speedup is obtained. At the core of this quantum big data algorithm is a nonsparse matrix exponentiation technique for efficiently performing a matrix inversion of the training-data inner-product (kernel) matrix.

Machine learning algorithms can be categorized along a spectrum of supervised and unsupervised learning [1-4]. In strictly unsupervised learning, the task is to find structure, such as clusters, in unlabeled data. Supervised learning involves a training set of already classified data, from which inferences are made to classify new data. In both cases, recent "big data" applications exhibit a growing number of features and input data.

A support vector machine (SVM) is a supervised machine learning algorithm that classifies vectors in a feature space into one of two sets, given training data from the sets [5]. It operates by constructing the optimal hyperplane dividing the two sets, either in the original feature space or in a higher-dimensional kernel space. The SVM can be formulated as a quadratic programming problem [6], which can be solved in time proportional to $O(\log(\epsilon^{-1})\,\mathrm{poly}(N,M))$, with $N$ the dimension of the feature space, $M$ the number of training vectors, and $\epsilon$ the accuracy. In a quantum setting, binary classification was discussed in terms of Grover's search in [7] and using the adiabatic algorithm in [8-11]. Quantum learning was also discussed in [12,13].

In this Letter, we show that a quantum support vector machine can be implemented with $O(\log NM)$ run time in both training and classification stages. The performance in $N$ arises from a fast quantum evaluation of inner products, discussed in a general machine learning context by us in [14]. For the performance in $M$, we reexpress the SVM as an approximate least-squares problem [15] that allows for a quantum solution with the matrix inversion algorithm [16,17]; a classical sketch of this least-squares system is given below. We employ a technique for the exponentiation of nonsparse matrices recently developed in [18]. This allows us to reveal efficiently in quantum form the largest eigenvalues and corresponding eigenvectors of the training-data overlap (kernel) and covariance matrices. We thus efficiently perform a low-rank approximation of these matrices [principal component analysis (PCA)]. PCA is a common task arising here and in other machine learning algorithms [19-21]. The error dependence in the training stage is $O(\mathrm{poly}(\epsilon_K^{-1}, \epsilon^{-1}))$, where $\epsilon_K$ is the smallest eigenvalue considered and $\epsilon$ is the accuracy. In cases when a low-rank approximation is appropriate, our quantum SVM operates on the full training set in logarithmic run time.

Support vector machine.-The task for the S...
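As a point of reference for the least-squares reformulation above: in the standard least-squares SVM of [15], the quadratic program is replaced by a linear system $F\,(b, \vec{\alpha}^T)^T = (0, \vec{y}^T)^T$ built from the kernel matrix $K_{jk} = \vec{x}_j \cdot \vec{x}_k$, and it is this matrix inversion that the quantum algorithm performs. The snippet below is a minimal classical sketch of that system, not the quantum algorithm itself; the linear kernel, the regularization parameter `gamma`, and the toy data are assumptions made only for illustration.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0):
    """Solve the least-squares SVM linear system of [15]:
    [[0, 1^T], [1, K + I/gamma]] (b, alpha)^T = (0, y)^T,
    with K_jk = x_j . x_k the (here: linear) kernel matrix."""
    M = X.shape[0]
    K = X @ X.T
    F = np.zeros((M + 1, M + 1))
    F[0, 1:] = 1.0
    F[1:, 0] = 1.0
    F[1:, 1:] = K + np.eye(M) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(F, rhs)   # the inversion done quantumly in this Letter
    return sol[0], sol[1:]          # bias b, weights alpha

def lssvm_classify(x, X, alpha, b):
    """Classify x by sign(sum_j alpha_j (x_j . x) + b)."""
    return np.sign(alpha @ (X @ x) + b)

# Hypothetical toy data: two separable clusters in the plane.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+1.0, 0.3, (20, 2)), rng.normal(-1.0, 0.3, (20, 2))])
y = np.concatenate([np.ones(20), -np.ones(20)])
b, alpha = lssvm_train(X, y)
print(lssvm_classify(np.array([0.9, 1.1]), X, alpha, b))   # expected: 1.0
```

Classically, solving this $(M+1) \times (M+1)$ system costs $\mathrm{poly}(M)$; the point of the Letter is that the quantum matrix inversion brings this to $O(\log NM)$ when the data are accessible in quantum form.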
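The low-rank (PCA) approximation mentioned above can likewise be sketched classically: only eigenvalues of the trace-normalized kernel matrix at or above a cutoff $\epsilon_K$ are retained, which bounds the effective condition number of the inversion. The cutoff value and the synthetic low-rank data are assumptions made for the example.

```python
import numpy as np

def lowrank_filter(K, eps_K=1e-2):
    """Project K onto the eigenspace with eigenvalues >= eps_K
    (in units where tr K = 1), i.e., a principal component filter."""
    K = K / np.trace(K)             # normalize the kernel matrix by its trace
    vals, vecs = np.linalg.eigh(K)  # K is symmetric, so eigh applies
    keep = vals >= eps_K
    K_lr = (vecs[:, keep] * vals[keep]) @ vecs[:, keep].T
    return K_lr, int(keep.sum())

# Hypothetical example: 40 training vectors in 8 dimensions that span only a
# 2-dimensional subspace, so the kernel matrix has (numerical) rank 2.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 8))
K_lr, rank = lowrank_filter(X @ X.T)
print(rank)   # typically 2: the components above the cutoff
```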