Even without speech recognition errors, robots may face difficulties interpreting natural-language instructions. We present a method for robustly handling miscommunication between people and robots in task-oriented spoken dialogue. This capability is implemented in TeamTalk, a conversational interface to robots that supports detection of and recovery from the situated grounding problems of referential ambiguity and impossible actions. We introduce a representation that detects these problems and a nearest-neighbor learning algorithm that selects recovery strategies for a virtual robot. When the robot encounters a grounding problem, it looks back on its interaction history to consider how it resolved similar situations. The learning method is trained initially on crowdsourced data and is then supplemented by interactions from a longitudinal user study in which six participants performed navigation tasks with the robot. We compare results from a general model with those from user-specific models and find that user-specific models perform best on measures of dialogue efficiency, while the general model yields the highest agreement with human judges. Our overall contribution is a novel approach to detecting and recovering from miscommunication in dialogue by including situated context, namely, information from a robot's path planner and surroundings.

CCS Concepts: • Computing methodologies → Discourse, dialogue and pragmatics; • Human-centered computing → Natural language interfaces;
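To make the nearest-neighbor recovery-strategy selection described above concrete, the following is a minimal sketch; the feature names (`num_referents`, `path_cost`, `dialogue_turns`) and strategy labels are hypothetical illustrations, not the paper's actual feature set.

```python
from collections import Counter
import math

def select_recovery_strategy(current_features, interaction_history, k=3):
    """Pick a recovery strategy by comparing the current grounding problem
    to past situations and voting among the k most similar ones."""
    def distance(a, b):
        # Euclidean distance over numeric situated-context features
        return math.sqrt(sum((a[f] - b[f]) ** 2 for f in a))

    # Rank past (features, strategy) pairs by similarity to the current situation
    neighbors = sorted(interaction_history,
                       key=lambda ex: distance(current_features, ex[0]))[:k]
    # Majority vote over the strategies used in the nearest past situations
    votes = Counter(strategy for _, strategy in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical situated-context features for a referential ambiguity
situation = {"num_referents": 3, "path_cost": 0.4, "dialogue_turns": 2}
history = [
    ({"num_referents": 2, "path_cost": 0.3, "dialogue_turns": 1}, "ask_clarification"),
    ({"num_referents": 1, "path_cost": 0.9, "dialogue_turns": 4}, "report_impossible"),
    ({"num_referents": 4, "path_cost": 0.5, "dialogue_turns": 2}, "ask_clarification"),
]
print(select_recovery_strategy(situation, history))  # -> "ask_clarification"
```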