In fluent speech, moments of acoustic prominence are tightly coordinated with peaks in the movement profile of hand gestures (e.g., the speed peak of a gesture). This gesture-speech coordination has been found to rely on continuous bidirectional feedback about upper-limb movement. Here, we investigated the gesture-speech coordination of a person with deafferentation, the well-studied case of IW. Although IW has lost both his primary source of information about body position (i.e., proprioception) and his sense of touch, his gesture-speech coordination has been reported to be largely unaffected, temporally and semantically, even when his vision is blocked. This is surprising because, without vision, his object-directed actions (e.g., grasping a cup) break down almost completely. Given this dissociation in IW between the control of gestures and of object-directed actions when vision is unavailable, it has been suggested that communicative gesture operates under separate neural-cognitive constraints. In the current kinematic-acoustic study, we reanalyzed the classic 1998 and 2002 gesture experiments with IW (McNeill, 2005). Extending previous findings, we show that micro-scale gesture-speech synchrony is compromised when vision is blocked, despite macro-scale coherence. Finally, we identify a biomechanical linkage that could explain why IW's gesture-speech coordination is only mildly compromised in the absence of visual information.
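To make the kinematic-acoustic notion of gesture-speech synchrony concrete, the sketch below shows one common way such coordination can be quantified: cross-correlating the smoothed speed of a wrist trajectory with the smoothed amplitude envelope of the speech signal, and reading the lag of the correlation peak. This is a minimal illustration, not the authors' actual analysis pipeline; the sampling rate, smoothing parameters, and synthetic signals are all assumptions chosen for demonstration.

```python
# Minimal, hypothetical sketch of a kinematic-acoustic synchrony measure
# (not the study's actual pipeline): cross-correlate smoothed wrist speed
# with the smoothed speech amplitude envelope and read off the peak lag.
import numpy as np
from scipy.signal import hilbert, correlate, correlation_lags
from scipy.ndimage import gaussian_filter1d

fs = 1000.0                        # shared sampling rate (Hz); illustrative
t = np.arange(0.0, 10.0, 1.0 / fs)

# Synthetic stand-ins for real recordings: a 1 Hz wrist oscillation and an
# amplitude-modulated tone whose energy bursts trail the speed peaks by 50 ms.
wrist_x = np.sin(2 * np.pi * 1.0 * t)
audio = np.sin(2 * np.pi * 150.0 * t) * (
    0.6 + 0.4 * np.cos(2 * np.pi * 2.0 * (t - 0.05))
)

# Gesture speed: magnitude of the position derivative, lightly smoothed.
speed = gaussian_filter1d(np.abs(np.gradient(wrist_x, 1.0 / fs)), sigma=20)

# Acoustic amplitude envelope: magnitude of the analytic (Hilbert) signal.
envelope = gaussian_filter1d(np.abs(hilbert(audio)), sigma=20)

# z-score both series, cross-correlate, and locate the lag of maximal overlap;
# a positive peak lag means acoustic prominence trails the gesture speed peaks.
speed_z = (speed - speed.mean()) / speed.std()
env_z = (envelope - envelope.mean()) / envelope.std()
xcorr = correlate(env_z, speed_z, mode="full") / len(t)
lags = correlation_lags(len(env_z), len(speed_z), mode="full") / fs
print(f"peak cross-correlation at lag {lags[np.argmax(xcorr)]:+.3f} s")
```

On the synthetic data above, the peak lag recovers the built-in 50 ms offset; applied to real motion-tracking and audio recordings, the same logic indexes how tightly (and with what lead or lag) speed peaks align with acoustic prominence.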