Interacting with a social robot should give people a better understanding of the robot's actions and intentions. In human-human interaction (HHI), people interpret the actions of others effortlessly. However, it is still unclear whether people can do the same with humanoid robots. Imitation of the robot's actions provides an intuitive means of addressing this question, because imitation is closely related to interpreting another's actions. In the study of human imitation, the theory of goal-directed imitation holds that imitators tend to reproduce action goals whenever those goals are salient; otherwise, they tend to imitate the action means. We investigated this theory in human-robot interaction by manipulating the presence or absence of a goal object while people imitated the robot's pointing gestures. The results showed that the presence of a goal object reduced participants' goal errors. Moreover, we found that most participants tended to match the means of the robot's action rather than its goals. To ensure that participants regarded the robot as a social agent, we designed a natural interaction task that included a turn-taking cue: the robot looked at the participant at varying latencies after completing its pointing gesture. As expected, we found that the earlier the robot gazed at the participant, the shorter the participant's reaction time to begin imitating. Our results show that people are responsive to a robot's social gaze cues and to the goals of a robot's actions, although not as strongly as in HHI.