Recent studies have shown that sparse representation (SR) can deal well with many computer vision problems, and its kernel version offers even more powerful classification capability. In this paper, we apply cooperative sparse representation (Co-SR) to semi-supervised image annotation, which enlarges the set of labeled images available for training image classifiers. Given a set of labeled (training) images and a set of unlabeled (test) images, the usual SR method, which we call forward SR, represents each unlabeled image with several labeled ones and then annotates the unlabeled image according to the annotations of those labeled images. However, to the best of our knowledge, SR in the opposite direction, which we call backward SR, has not been addressed by researchers: it represents each labeled image with several unlabeled images, and then annotates an unlabeled image according to the annotations of the labeled images that the backward SR selects it to represent. In this paper, we explore how much the backward SR can contribute to image annotation and how complementary it is to the forward SR. Co-training, a semi-supervised method in which two classifiers have been proved to improve each other only if they are relatively independent, is then adopted to verify this complementary nature between the two SRs in opposite directions. Finally, co-training the two SRs in kernel space yields a cooperative kernel sparse representation (Co-KSR) method for image annotation. Comparative experiments are carried out against state-of-the-art semi-supervised classifiers, namely LGC (Local and Global Consistency) and GFHF (Gaussian Fields and Harmonic Functions). Comparative experiments with a non-sparse solution are also performed to show that sparsity plays an important role in the cooperation of image representations in the two opposite directions. Our work extends the application of sparse representation to image annotation and retrieval.
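
To make the two coding directions concrete, the following is a minimal sketch of how forward and backward SR annotation could be realized. It is an illustration under stated assumptions, not the authors' implementation: the helper names `forward_sr_annotate` and `backward_sr_annotate` are hypothetical, a Lasso (l1-regularized least squares) stands in for whatever sparse solver the paper uses, the coding is done in the original feature space rather than the kernel space of Co-KSR, and nonnegative coefficients are used directly as class votes.

```python
# Sketch of forward and backward SR annotation. Assumptions: hypothetical
# helper names, Lasso as the l1 solver, linear (non-kernel) feature space,
# nonnegative sparse coefficients accumulated as per-class votes.
import numpy as np
from sklearn.linear_model import Lasso


def _sparse_code(D, x, alpha):
    """Solve x ~= D @ w with an l1 penalty; D holds one atom per column."""
    coder = Lasso(alpha=alpha, positive=True, fit_intercept=False, max_iter=5000)
    coder.fit(D, x)
    return coder.coef_


def forward_sr_annotate(X_labeled, y_labeled, X_unlabeled, n_classes, alpha=0.01):
    """Forward SR: code each unlabeled image over the labeled images and
    accumulate its sparse coefficients into per-class scores."""
    D = X_labeled.T                                # dictionary of labeled images
    scores = np.zeros((len(X_unlabeled), n_classes))
    for i, x in enumerate(X_unlabeled):
        w = _sparse_code(D, x, alpha)
        for j, c in enumerate(y_labeled):
            scores[i, c] += w[j]                   # vote weighted by coefficient
    return scores.argmax(axis=1), scores


def backward_sr_annotate(X_labeled, y_labeled, X_unlabeled, n_classes, alpha=0.01):
    """Backward SR: code each labeled image over the unlabeled images; an
    unlabeled image inherits votes from the labeled images it helps represent."""
    D = X_unlabeled.T                              # dictionary of unlabeled images
    scores = np.zeros((len(X_unlabeled), n_classes))
    for x, c in zip(X_labeled, y_labeled):
        w = _sparse_code(D, x, alpha)
        scores[:, c] += w                          # selected atoms get votes for c
    return scores.argmax(axis=1), scores
```

In a co-training round, each direction would annotate the unlabeled images it scores most confidently and pass them to the other direction as additional labeled data, which is how the complementarity of the two SRs would be exploited in practice.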