This paper presents the results of a user study measuring the impact of shared decision making in a human-robot team. In the experiments described here, a human and a robot play a game together in which the robot searches an arena for items, with input from the human, and the human-robot team earns points for finding and correctly identifying those items. The user study involved 60 human subjects, each of whom interacted with two different robots. With one robot, the human acted as a supervisor: the human issued commands and the robot obeyed. With the other robot, the human acted as a collaborator: the human and robot shared decisions and, facilitated by computational argumentation, were required to reach agreement about the robot’s actions in the arena before any actions were taken. Objective performance metrics were collected and analyzed for both types of human-robot team, as well as subjective feedback from the human subjects regarding their attitudes toward working with a robot. The objective results showed significant improvement in performance metrics for the human-as-collaborator pairs versus the human-as-supervisor pairs. The subjective results demonstrated significant differences across many measures and indicated a distinct preference for the human-as-collaborator mode. The primary contribution of this work lies in the demonstration and evaluation of a computational argumentation approach to human-robot interaction, particularly in establishing the efficacy of this approach over a less autonomous mode of interaction.