Sensory feedback is critical to fine motor control, learning, and adaptation. However, robotic prosthetic limbs currently lack the feedback segment of the communication loop between user and device. Sensory substitution feedback can close this gap, but in some cases the resulting performance improvement persists only when users cannot see their prosthesis, suggesting that the provided feedback is redundant with vision and that, given the choice, users rely on vision over artificial feedback. To effectively augment vision, sensory feedback must therefore convey information that vision cannot provide, or provides poorly. Although vision is known to estimate speed less precisely than position, no work has compared the precision of visual speed estimates for biomimetic arm movements. In this study, we investigated the uncertainty of visual speed estimates for different virtual arm movements. We found that uncertainty was greatest for visual estimates of joint speeds, compared with absolute rotational or linear endpoint speeds. Furthermore, this uncertainty increased when the speed in the joint reference frame varied over time, potentially because joint speed was overestimated. Finally, we demonstrate a joint-based sensory substitution feedback paradigm that significantly reduces joint speed uncertainty when paired with vision. Ultimately, this work may lead to improved prosthesis control and greater capacity for motor learning.