Artificial Intelligence (AI) has the potential to influence people’s lives in various ways as it is increasingly integrated into important decision-making processes in key areas of society. While AI offers opportunities, it also entails risks. These risks have sparked debates about how AI should be regulated: through government regulation or industry self-regulation. AI-related risk perceptions can be shaped by national cultures, especially the cultural dimension of uncertainty avoidance. This raises the question of whether people in countries with higher levels of uncertainty avoidance hold different preferences regarding AI regulation than those in countries with lower levels. Therefore, using Hofstede’s uncertainty avoidance scale and data from ten European countries (N = 7,855), this study investigates the relationships between uncertainty avoidance, people’s AI risk perceptions, and their regulatory preferences. The findings show that people in countries with higher levels of uncertainty avoidance are more likely to perceive AI risks in terms of a lack of accountability and a lack of responsibility. Whereas the perceived risk of a lack of accountability drives preferences for government regulation of AI exclusively, the perceived risk of a lack of responsibility can foster preferences for government regulation and/or industry self-regulation. This study thus contributes to a better understanding of the mechanisms that shape people’s preferences for AI regulation.