Most research on human activity recognition does not take into account the temporal localization of actions. In this paper, a new method is designed to model both actions and their temporal domains. It builds on a new Hough-based approach that outperforms previously published methods on the honeybee dataset thanks to a deeper optimization of the Hough variables. Experiments are performed to select skeleton features suited to this method and relevant for capturing human actions. With these features, our pipeline improves on state-of-the-art performance on the TUM dataset and outperforms baselines on several public datasets.