Human-Robot Collaboration in an industrial context requires smooth, natural, and efficient coordination between robots and human operators. The approach we propose to achieve this goal is to use online recognition of technical gestures. In this paper, we bring together, and analyze, parameterize, and evaluate far more thoroughly, three findings previously presented separately in conference papers: 1/ we show on a real prototype that multi-user, continuous, real-time recognition of technical gestures on an assembly line is feasible (≈ 90% recall and precision in our case study), using only non-intrusive sensors (a top-view depth camera, plus inertial sensors placed on tools); 2/ we formulate an end-to-end methodology for designing and developing such a system; 3/ we propose a method for adapting our gesture recognition to new users. Furthermore, we present here two new findings: 1/ by comparing recognition performance across several feature sets, we highlight the importance of choosing features that focus on the effective part of gestures, i.e. usually hand movements; 2/ we obtain new results suggesting that enriching a multi-user training set can lead to higher precision than using a separate training dataset for each operator.
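The feature-set comparison mentioned above can be illustrated with a minimal sketch. This is not the system described in the paper: it uses synthetic data, a toy nearest-centroid classifier, and hypothetical feature indices (0-1 standing in for hand coordinates, 2-5 for other posture values) purely to show how one would measure accuracy for a hand-focused subset versus the full feature vector.

```python
# Illustrative sketch only (synthetic data, toy classifier); the paper's
# actual system uses depth-camera and inertial-sensor features.
import random

random.seed(0)

def make_sample(gesture):
    # Hypothetical per-frame feature vector: indices 0-1 are hand
    # coordinates (informative for the gesture class); indices 2-5 are
    # other posture values, modeled here as uninformative noise.
    hand = [gesture + random.gauss(0, 0.3), -gesture + random.gauss(0, 0.3)]
    body = [random.gauss(0, 1.0) for _ in range(4)]
    return hand + body

def nearest_centroid_accuracy(train, test, idx):
    # Build one centroid per gesture class over the selected feature
    # indices, then classify test samples by nearest centroid.
    centroids = {}
    for label, samples in train.items():
        cols = [[s[i] for s in samples] for i in idx]
        centroids[label] = [sum(c) / len(c) for c in cols]
    correct = total = 0
    for label, samples in test.items():
        for s in samples:
            sel = [s[i] for i in idx]
            pred = min(
                centroids,
                key=lambda l: sum((a - b) ** 2 for a, b in zip(sel, centroids[l])),
            )
            correct += pred == label
            total += 1
    return correct / total

labels = [0, 1, 2]
train = {l: [make_sample(l) for _ in range(50)] for l in labels}
test = {l: [make_sample(l) for _ in range(50)] for l in labels}

acc_hands = nearest_centroid_accuracy(train, test, idx=[0, 1])
acc_all = nearest_centroid_accuracy(train, test, idx=[0, 1, 2, 3, 4, 5])
print(f"hand-focused features: {acc_hands:.2f}, all features: {acc_all:.2f}")
```

The same evaluation loop, applied to real feature sets and a real classifier, is what the comparison in the paper amounts to: when the extra dimensions carry little gesture-specific information, restricting the classifier to hand-centric features tends to match or improve accuracy.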