Summary

We sought to replicate and expand previous work showing that the more human-like a robot appears, the more willing people are to attribute mind-like capabilities to it and to engage with it socially. Forty-two participants played games against a human, a humanoid robot, a mechanoid robot, and a computer algorithm while undergoing functional neuroimaging. Replicating previous studies, we confirmed that the more human-like the agent, the more participants attributed a mind to it. However, exploratory analyses revealed that beyond humanness, the perceived socialness of an agent appeared to be as important, if not more so, for mind attribution. Our findings suggest that top-down knowledge cues are at least as important as bottom-up stimulus cues when exploring mind attribution in non-human agents. While further work is now required to test this hypothesis directly, these preliminary findings hold important implications for robotic design and for understanding and testing the flexibility of human social cognition when people engage with artificial agents.