While there is no doubt that social signals affect human reinforcement learning, there is still no consensus about their exact computational implementation. To address this issue, we compared three hypotheses about the algorithmic implementation of imitation in human reinforcement learning. The first hypothesis, decision biasing, postulates that imitation consists in transiently biasing the learner's action selection without affecting her value function. According to the second hypothesis, model-based imitation, the learner infers the demonstrator's value function through inverse reinforcement learning and uses it for action selection. Finally, according to the third hypothesis, value shaping, the demonstrator's actions directly affect the learner's value function. We tested these three psychologically plausible hypotheses in two separate experiments (N = 24 and N = 44) featuring a new variant of a social reinforcement learning task in which we manipulated the quantity and the quality of the demonstrator's choices. Model comparison favors value shaping, which provides a new perspective on how imitation is integrated into human reinforcement learning.
Reinforcement Learning | Social Learning | Imitation | Computational cognitive modeling | Decision-making | Meta-learning

Correspondence: anis.najar@ens.fr, stefano.palminteri@ens.fr
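For concreteness, the three hypotheses can be sketched as variants of a standard Q-learning model. The formulation below is a minimal illustrative sketch, not the paper's actual equations: the symbols $a_D$ (the demonstrator's action), $\hat{Q}_D$ (the learner's estimate of the demonstrator's values), and the weights $\beta$, $\kappa$, $\beta_D$, $\eta$ are assumptions introduced here for exposition.

\begin{align*}
\text{(individual learning)} \quad & Q(a) \leftarrow Q(a) + \alpha\,[r - Q(a)] \\
\text{(decision biasing)} \quad & p(a) \propto \exp\!\big(\beta\,Q(a) + \kappa\,\mathbb{1}[a = a_D]\big), \quad Q \text{ unchanged by } a_D \\
\text{(model-based imitation)} \quad & p(a) \propto \exp\!\big(\beta\,Q(a) + \beta_D\,\hat{Q}_D(a)\big), \quad \hat{Q}_D \text{ inferred by inverse RL} \\
\text{(value shaping)} \quad & Q(a_D) \leftarrow Q(a_D) + \eta\,[1 - Q(a_D)]
\end{align*}

On this sketch the hypotheses are empirically separable: under decision biasing the social influence is transient and leaves the learned values untouched, whereas under value shaping the demonstration is written directly into the value function and therefore persists after demonstrations stop.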