Humans rely heavily on social learning to navigate the social and physical world. For the first time in history, we are interacting in online social networks where content algorithms filter social information, yet little is known about how these algorithms influence our social learning. In this review, we synthesize emerging insights into this ‘algorithm-mediated social learning’ and propose a framework that examines its consequences in terms of functional misalignment. We argue that the functions of human social learning and the goals of content algorithms are misaligned in practice. Algorithms exploit basic human social learning biases (i.e., a bias toward PRestigious, Ingroup, Moral, and Emotional information, or PRIME information) as a side effect of their goals to sustain attention and maximize engagement on platforms. Social learning biases function to promote adaptive behaviors that foster cooperation and collective problem-solving. However, when social learning biases are exploited by algorithms, PRIME information becomes amplified in the digital social environment in ways that can stimulate conflict and spread misinformation. We show how this problem is ultimately driven by human-algorithm interactions in which observational and reinforcement learning exacerbate algorithmic amplification, and how it may even escalate to shape cultural evolution. Finally, we discuss practical solutions for reducing functional misalignment in human-algorithm interactions via strategies that help algorithms promote more diverse and contextually sensitive information environments.