Spoken emotion recognition is an active research topic that has attracted extensive attention in signal processing, pattern recognition, and artificial intelligence. In this paper, a new emotion classification method based on kernel sparse representation, named locality-constrained kernel sparse representation-based classification (LC-KSRC), is proposed for spoken emotion recognition. Because it integrates both sparsity and data locality in the kernel feature space, LC-KSRC learns more discriminative sparse representation coefficients for spoken emotion recognition. The proposed method is compared with six representative emotion classification methods: the linear discriminant classifier, K-nearest neighbor, radial basis function neural networks, support vector machines, sparse representation-based classification, and kernel sparse representation-based classification. Experimental results on two publicly available emotional speech databases, the Berlin database and the Polish database, demonstrate the promising performance of the proposed method on spoken emotion recognition tasks, where it outperforms all compared methods.
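To make the idea summarized above concrete, the following is a minimal sketch of locality-constrained kernel sparse coding followed by class-wise residual classification. It assumes an RBF kernel, an exponential locality adaptor, and hypothetical hyper-parameters (`gamma`, `sigma`, `lam`); it illustrates the general technique, not the authors' exact LC-KSRC formulation.

```python
# Sketch: locality-constrained kernel sparse coding + class-wise residual rule.
# Assumptions (not from the paper): RBF kernel, exponential locality weights,
# and a weighted l1 penalty solved as a standard Lasso after rescaling.
import numpy as np
from numpy.linalg import cholesky, solve
from sklearn.linear_model import Lasso
from sklearn.metrics.pairwise import rbf_kernel


def lc_ksrc_predict(X_train, y_train, x_test, gamma=0.1, sigma=1.0, lam=0.01):
    """Classify one test sample x_test (d,) given X_train (n, d), y_train (n,)."""
    K = rbf_kernel(X_train, X_train, gamma=gamma)                      # n x n Gram matrix
    k_y = rbf_kernel(X_train, x_test[None, :], gamma=gamma).ravel()    # kernel values to test sample

    # Locality adaptor: penalize atoms far from the test sample in feature space;
    # for an RBF kernel, ||phi(x_i) - phi(y)||^2 = 2 - 2 * k(x_i, y).
    d = np.exp((2.0 - 2.0 * k_y) / sigma)

    # Kernel trick via "virtual" samples: K = R^T R, so the columns of R act as
    # Phi(X) and the virtual test vector satisfies R^T y_virt = k_y.
    R = cholesky(K + 1e-8 * np.eye(len(K))).T                          # upper-triangular factor
    y_virt = solve(R.T, k_y)

    # Weighted l1 penalty: absorb locality weights into the dictionary columns,
    # solve a plain Lasso, then rescale back to the original coefficients.
    D = R / d[None, :]
    lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
    lasso.fit(D, y_virt)
    alpha = lasso.coef_ / d

    # Class-wise reconstruction residual in the kernel feature space; the
    # constant k(y, y) term is identical for all classes and is dropped.
    best_label, best_res = None, np.inf
    for label in np.unique(y_train):
        idx = np.where(y_train == label)[0]
        a = alpha[idx]
        res = -2.0 * k_y[idx] @ a + a @ K[np.ix_(idx, idx)] @ a
        if res < best_res:
            best_label, best_res = label, res
    return best_label
```

The Cholesky factorization here simply reduces the kernel sparse-coding problem to an ordinary Lasso on virtual samples, so any off-the-shelf l1 solver can be reused; the locality weights make coefficients on atoms far from the test sample more expensive, which is the intuition behind combining sparsity with data locality.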