SUMMARY Multi-task joint sparse representation (MTJSR) is an efficient multi-task learning (MTL) method that solves related problems jointly through a shared sparse representation. Inspired by the self-paced mechanism of human learning, in which tasks are mastered gradually from easy to difficult, I apply this mechanism to MTJSR and propose a multi-task joint sparse representation with self-paced learning (MTJSR-SP) algorithm. In MTJSR-SP, the self-paced learning mechanism is formulated as a regularizer in the objective function, and an iterative optimization procedure is applied to solve the resulting problem. Compared with traditional MTL methods, MTJSR-SP is more robust to noise and outliers. Experimental results on several datasets, i.e. two synthetic datasets, four datasets from the UCI machine learning repository, the Oxford flower dataset, and the Caltech-256 image categorization dataset, validate the effectiveness of MTJSR-SP.
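As a schematic illustration only (the notation below is assumed here and is not taken from the paper), a common way to attach a self-paced regularizer to a joint-sparse multi-task objective over $T$ tasks with $N$ samples each is

\[
\min_{W,\; v \in [0,1]^{T \times N}} \; \sum_{t=1}^{T} \sum_{i=1}^{N} v_{t,i}\, L\!\left(y_{t,i}, f(x_{t,i}; w_t)\right) \;+\; \lambda \lVert W \rVert_{2,1} \;-\; \gamma \sum_{t=1}^{T} \sum_{i=1}^{N} v_{t,i},
\]

where $\lVert W \rVert_{2,1}$ couples the task-specific coefficient vectors $w_t$ through joint (row-wise) sparsity, $v_{t,i}$ is a per-sample easiness weight, and $\gamma$ is the self-paced age parameter. A typical alternating scheme matches the iterative optimization mentioned above: with $W$ fixed, $v_{t,i}=1$ when the loss of sample $i$ in task $t$ falls below $\gamma$ and $0$ otherwise; with $v$ fixed, $W$ is updated on the currently selected easy samples; $\gamma$ is then increased so that harder samples are gradually included.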