This paper explores Wikipedia bots and problematic information in order to consider implications for cultivating students' critical media literacy. While we recognize the key role of Wikipedia bots in addressing and reducing problematic information (misinformation and disinformation) on the encyclopedia, it is reductive to construe their impacts as uniformly benign. To understand bots and other algorithms as more than mere tools, we turn to a postdigital theorization of them as 'agents' that coproduce knowledge in conjunction with human editors and actors. The paper presents case studies of three Wikipedia bots: ClueBot NG, AAlertBot, and COIBot, each of which performs some form of information validation in the encyclopedia. The bot activities illustrated in these case studies ultimately support our argument that information validation processes in Wikipedia are complicated by their distribution across multiple human-computer relations and agencies. Although these bots are programmed to combat problematic information, their efficacy is challenged by social, cultural, and technical issues related to misogyny, systemic bias, and conflict of interest. Studying how Wikipedia bots function makes space for extending educational models of critical media literacy. In the postdigital era of problematic information, students should remain alert to how the human and the nonhuman, the digital and the nondigital, interfere and exert agency in Wikipedia's complex and highly volatile processes of information validation.