We present an approach for detecting potentially unsafe commands in human-robot dialog, in which a robotic system estimates the cost of input commands and asks input-specific, directed questions to ensure safe task execution. The goal is to reduce risk, both to the robot and to the environment, by asking context-appropriate questions. Given an input program (i.e., a sequence of commands), the system evaluates a set of likely alternate programs along with their likelihoods and costs; these are given as input to a Decision Function that decides whether to execute the task or to request confirmation of the plan from the human partner. A process called token-risk grounding identifies the costly commands in the programs and specifically asks the human user to clarify those commands. We evaluate our system on two simulated robot tasks, and also on board the Willow Garage PR2 and TurtleBot robots in an indoor task setting. In both sets of evaluations, the results show that the system identifies the specific commands that contribute to high task cost and presents users with the option to either confirm or modify those commands. In addition to improving task safety, this yields an overall reduction in robot reprogramming time.
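To make the decision step concrete, the following is a minimal Python sketch of how such a Decision Function might weigh alternate programs by their likelihoods and costs, and how token-risk grounding might attribute risk to individual commands. All names (AlternateProgram, decide, COST_THRESHOLD), the expected-cost criterion, and the per-command attribution rule are illustrative assumptions, not the system's actual implementation.

    # Sketch of a cost-aware Decision Function with token-risk grounding.
    # Names, thresholds, and the cost model are illustrative assumptions.
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class AlternateProgram:
        commands: List[str]    # sequence of robot commands
        likelihood: float      # probability this is the intended program
        costs: List[float]     # per-command execution cost

    COST_THRESHOLD = 5.0       # assumed acceptable-risk bound; tune per task

    def decide(alternates: List[AlternateProgram]) -> Tuple[str, List[str]]:
        """Return ("execute", []) if expected task cost is acceptable,
        else ("confirm", risky), where `risky` lists the commands whose
        cost drives the decision (token-risk grounding)."""
        expected_cost = sum(p.likelihood * sum(p.costs) for p in alternates)
        if expected_cost <= COST_THRESHOLD:
            return "execute", []
        # Token-risk grounding: attribute the risk to specific commands
        # so the system can ask a directed question about just those.
        risky = sorted(
            {cmd for p in alternates
                 for cmd, c in zip(p.commands, p.costs)
                 if p.likelihood * c > COST_THRESHOLD / max(len(p.commands), 1)}
        )
        return "confirm", risky

    if __name__ == "__main__":
        programs = [
            AlternateProgram(["move_to(table)", "grasp(knife)"], 0.7, [1.0, 9.0]),
            AlternateProgram(["move_to(table)", "grasp(fork)"], 0.3, [1.0, 2.0]),
        ]
        action, risky = decide(programs)
        print(action, risky)   # prints: confirm ['grasp(knife)']

In this toy run the expected cost (7.9) exceeds the bound, so rather than refusing the whole program, the sketch surfaces only the single high-cost command ("grasp(knife)") for the user to confirm or modify, mirroring the directed-question behavior described above.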