When learning sign language, feedback on accuracy is critical to vocabulary acquisition. When designing technologies to provide such feedback, we need to research effective ways to identify errors and present meaningful feedback to learners. Motion capture technologies provide new opportunities to enhance sign language learning through visual feedback that indicates the accuracy of the signs learners make. We designed, developed, and evaluated an embodied agent-based system for learning the location and gross motor movements of sign language vocabulary. The system presents a sign, tracks the learner's attempts at it, and provides visual feedback on their errors. We compared five types of visual feedback, and in a study with 51 participants we found that learners preferred visual feedback in which their attempts at a sign were shown concurrently with the instructor's movements, with or without explicit corrections.

CCS CONCEPTS
• Human-centered computing → Usability testing; Empirical studies in accessibility.