In this paper, we present a descriptor for human whole-body actions based on motion coordination. We exploit the principle, well known in neuromechanics, that humans move their joints in a coordinated fashion. Our coordination-based descriptor (CODE) is computed in two main steps. The first step identifies the most informative joints, which characterize the motion. The second step enriches the descriptor by considering the minimum and maximum joint velocities and the correlations between the most informative joints. In order to compute distances between action descriptors, we propose a novel correlation-based similarity measure. The performance of CODE is tested on two public datasets, namely HDM05 and Berkeley MHAD, and compared with state-of-the-art approaches, showing competitive recognition results.

Index Terms: Gesture, human factors and human-in-the-loop, human-centered automation, posture and facial expressions.
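
To make the two-step pipeline concrete, the following is a minimal Python sketch of how a coordination-based descriptor and a correlation-based similarity of this kind could be computed. The variance-based criterion for joint informativeness, the descriptor layout, and all function names here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def code_descriptor(joint_velocities, n_informative=5):
    """Sketch of a CODE-style descriptor (assumptions, not the paper's exact method).

    joint_velocities: (T, J) array of per-frame joint velocity magnitudes.
    Joint informativeness is approximated here by velocity variance.
    """
    # Step 1 (assumed criterion): pick the joints with the largest velocity variance.
    variances = joint_velocities.var(axis=0)
    informative = np.argsort(variances)[-n_informative:]
    v = joint_velocities[:, informative]

    # Step 2: enrich with per-joint min/max velocities and pairwise correlations.
    v_min = v.min(axis=0)
    v_max = v.max(axis=0)
    corr = np.corrcoef(v, rowvar=False)          # correlations between informative joints
    iu = np.triu_indices(n_informative, k=1)     # keep each joint pair once
    return np.concatenate([v_min, v_max, corr[iu]])

def correlation_similarity(d1, d2):
    """Correlation-based similarity between two descriptors; here simply the
    Pearson correlation of the descriptor vectors (an assumed stand-in for
    the paper's proposed measure)."""
    return np.corrcoef(d1, d2)[0, 1]

# Usage example on synthetic sequences (120 frames, 20 joints).
rng = np.random.default_rng(0)
a = code_descriptor(rng.standard_normal((120, 20)))
b = code_descriptor(rng.standard_normal((120, 20)))
print(correlation_similarity(a, b))
```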