Interfaces based on mid-air gestures often rely on a one-to-one mapping between gestures and commands, yet most of these mappings remain simplistic. In practice, people exhibit inherent variation in how they articulate gestures, because gestures depend both on the person producing them and on the social or cultural context in which they are produced. We argue that allowing applications to map many gestures to a single command is a key step toward greater flexibility, toward avoiding penalizing users for natural variation, and toward better interaction experiences. Accordingly, this paper presents our results on mid-air gesture variability. We are chiefly concerned with understanding variability in mid-air gesture articulation from a purely user-centric perspective, and we describe a comprehensive investigation of how users vary their gesture production under unconstrained articulation conditions. The user study consisted of two tasks. The first yields a model of how users conceive and produce gestures; from this study we also derive an embodied taxonomy of gestures. This taxonomy serves as the basis for the second experiment, in which we perform a fine-grained quantitative analysis of gesture articulation variability. Based on these results, we discuss implications for the design of gesture interfaces.