In the field of human-agent interaction, increasing agents' perceived trustworthiness and intelligence, as well as users' motivation to continue interacting, are important issues. Prior research has attempted to address this problem by having agents meet user expectations. In this paper, we attempt to show that, on the contrary, agents that contradict user expectations can increase agents' trustworthiness and users' motivation to continue the dialogue. We conducted a web-based experiment using an "association game" task, in which participants were given three hints and answered what those hints have in common. The correct answer was then given, along with a fourth hint. In the first, "expected" condition, the correct answer was the one that the majority of participants would associate with the hints. In the second, "unexpected" condition, the correct answer differed from the one that the majority of participants would associate with the hints. After the task, participants rated the agent's trustworthiness, the agent's intelligence, and their motivation to continue the interaction on a 7-point Likert scale. The results showed that the agent's trustworthiness and users' motivation to continue the interaction were significantly higher in the "unexpected" condition than in the "expected" condition. This indicates that an agent contradicting the user's expectations can increase the agent's perceived trustworthiness and the user's motivation to continue the interaction. These results demonstrate the effectiveness of a new approach in human-agent interaction that has so far received little attention.