In this article, we introduce a fast, accurate and invariant method for RGB-D based human action recognition using a Hierarchical Kinematic Covariance (HKC) descriptor. Recently, non-singular covariance matrices of pattern features, which are elements of the space of Symmetric Positive Definite (SPD) matrices, have been proven to be very efficient descriptors in the field of pattern recognition. However, in the case of action recognition, singular covariance matrices cannot be avoided because the dimension of the features may be higher than the number of samples. Such covariance matrices (non-singular and singular) belong to the space of Symmetric Positive semi-Definite (SPsD) matrices. Thus, in order to classify actions, we propose to adapt kernel methods such as Support Vector Machines (SVM) and Multiple Kernel Learning (MKL) to the space of SPsD matrices by using a perturbed Log-Euclidean distance (Arsigny et al., 2006). The mathematical validity of this perturbed distance (called the Modified Log-Euclidean distance) for SPsD matrices is therefore studied. Offline experiments are conducted on three challenging benchmarks, namely the MSRAction3D, UTKinect and Multiview3D datasets. A fair comparison demonstrates that our approach competes with state-of-the-art methods in terms of accuracy and computational latency. Finally, our method is extended to an online scenario, and experiments on MSRC12 prove the efficiency of this extension.

Surveys of RGB-based human action recognition methods can be found in (Poppe, 2010; Weinland et al., 2011). Unfortunately, these methods suffer from some limitations: their performance is negatively affected by occlusions, view-point variation, illumination changes and body segmentation. With the availability of low-cost RGB-D cameras, a renewed interest in action recognition has been observed. In addition to classical RGB images, this kind of camera also provides depth images. Furthermore, the recent algorithm proposed by Shotton et al. (2013) allows real-time human skeleton extraction from depth maps. Thus, new methods (Rahmani et al., 2016; Amor et al., 2016; Brun et al., 2016; Liu et al., 2016) have been proposed.
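To make the idea of the perturbed distance mentioned in the abstract more concrete, the following is a minimal sketch. It assumes the perturbation simply adds a small multiple of the identity to each SPsD covariance matrix before taking the matrix logarithm, and then applies the Log-Euclidean distance of Arsigny et al. (2006) together with a Gaussian kernel for a precomputed-kernel SVM; the function names (`spsd_log`, `modified_log_euclidean_distance`, `rbf_gram_matrix`) and the parameter `eps` are illustrative and not taken from the paper.

```python
import numpy as np


def spsd_log(M, eps=1e-6):
    """Real matrix logarithm of a symmetric positive semi-definite matrix,
    regularised with eps * I so the logarithm is defined even when M is singular."""
    w, V = np.linalg.eigh(M + eps * np.eye(M.shape[0]))
    return (V * np.log(w)) @ V.T


def modified_log_euclidean_distance(A, B, eps=1e-6):
    """Perturbed Log-Euclidean distance: Frobenius norm of the difference of the
    regularised matrix logarithms (after Arsigny et al., 2006)."""
    return np.linalg.norm(spsd_log(A, eps) - spsd_log(B, eps), ord="fro")


def rbf_gram_matrix(covs, sigma=1.0, eps=1e-6):
    """Gaussian kernel Gram matrix built on the perturbed distance; it can be fed
    to a kernel SVM (e.g. sklearn.svm.SVC with kernel='precomputed')."""
    n = len(covs)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            d = modified_log_euclidean_distance(covs[i], covs[j], eps)
            K[i, j] = K[j, i] = np.exp(-d ** 2 / (2.0 * sigma ** 2))
    return K


if __name__ == "__main__":
    # Toy descriptors: 10x10 covariance matrices estimated from only 5 samples,
    # hence rank-deficient (singular), mimicking the dimension > samples case.
    rng = np.random.default_rng(0)
    covs = [np.cov(rng.standard_normal((10, 5))) for _ in range(4)]
    print(rbf_gram_matrix(covs, sigma=1.0))
```

Because the regularised matrices are strictly positive definite, their logarithms are well defined and the resulting Gram matrix remains symmetric and positive semi-definite, which is what a kernel SVM or an MKL combination of such kernels requires.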