The main goal of multi-task learning (MTL) is to improve generalization by exploiting the domain-specific information implicit in the training signals of multiple related tasks. MTL achieves this by training multiple tasks in parallel with shared representations. The adaptation of MTL to support vector machines (SVMs) is a notably successful example. The loss function plays a crucial role in both the algorithmic implementation and the classification accuracy of an SVM. The classical support vector machine (SVM) and the multi-task learning support vector machine (MTLSVM) use the hinge loss, which is non-differentiable and sensitive to noise. The SVM with pinball loss, which also penalizes correctly classified points, is less noise-sensitive than the classical SVM, but the pinball loss is likewise non-differentiable. Both the pinball loss SVM and the hinge loss SVM incur a high computational cost, since they must solve a quadratic programming problem (QPP) whose complexity grows with the cube of the dataset size. In contrast, the Huber loss function is less sensitive to noise than the hinge and pinball losses, and its differentiability can greatly improve the running speed of the SVM algorithm. Inspired by the recently published generalized Huber SVM (GHSVM) and regularized multi-task learning (RMTL), we propose a novel multi-task support vector machine with generalized Huber loss, named GHMTSVM. The new method extends the single-task GHSVM to multi-task learning. To our knowledge, this is the first application of the generalized Huber loss to MTLSVM. The proposed method has two main advantages: on the one hand, compared with the pinball and hinge losses, the Huber loss is less sensitive to outliers and therefore more robust; on the other hand, it uses a functional iterative method to find the optimal solution and does not need to solve a QPP, which improves the training speed.
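As a rough illustration of the loss-function comparison above (the exact generalized Huber loss is defined in the GHSVM paper; here the standard quadratically smoothed "Huberized" hinge serves as a stand-in, and the `delta` smoothing parameter is our assumption), the following sketch evaluates the three losses as functions of the margin m = y f(x):

```python
def hinge(m):
    """Hinge loss: zero for margin >= 1, linear otherwise; kink at m = 1."""
    return max(0.0, 1.0 - m)

def pinball(m, tau=0.5):
    """Pinball loss: unlike the hinge, it also charges tau*(m - 1) on
    correctly classified points with margin > 1, which is the source of
    its noise insensitivity."""
    u = 1.0 - m
    return u if u >= 0.0 else -tau * u

def huberized_hinge(m, delta=1.0):
    """Quadratically smoothed (Huberized) hinge: differentiable everywhere,
    linear for badly misclassified points, zero for margin >= 1.
    A stand-in for the generalized Huber loss used in GHSVM."""
    if m >= 1.0:
        return 0.0
    if m > 1.0 - delta:
        return (1.0 - m) ** 2 / (2.0 * delta)
    return (1.0 - m) - delta / 2.0

# A correctly classified point with large margin: hinge and Huberized
# hinge give zero loss, while the pinball loss still applies a penalty.
print(hinge(2.0), pinball(2.0), huberized_hinge(2.0))  # 0.0 0.5 0.0
```

Because the smoothed loss is differentiable at the kink points, gradient-based or functional iterative solvers can be applied directly instead of formulating a QPP.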
Numerical experiments are carried out on five real datasets, and the results are compared with the SVM, GHSVM, and MTLSVM methods to verify the performance of the proposed algorithm. The experimental results show that GHMTSVM outperforms the existing single-task learning SVMs and the multi-task learning SVM.