Bias is a key issue in expert and public discussions about Artificial Intelligence (AI). While some hope that AI will help to eliminate human bias, others are concerned that AI will exacerbate it. To highlight the political and power aspects of bias in AI, this contribution examines the so far largely overlooked topic of the framing of bias in AI policy. Among the diverse approaches to diagnosing problems and suggesting prescriptions, we distinguish two stylized framings of bias in AI policy: one more technical, the other more social. The powerful technical framing suggests that AI can be a solution to human bias and can help to detect and eliminate it. It is challenged by an alternative social framing, which emphasizes the importance of social contexts, power balances and structural inequalities. The technical framing sees a simple technological fix as the way to deal with bias in AI. For the social framing, we suggest approaching bias in AI as a complex wicked problem that requires a broader strategy involving diverse stakeholders and actions. The social framing of bias in AI considerably expands the legitimate understanding of bias and the scope of potential actions beyond the technological fix. We argue that, in the context of AI policy, intersectional bias should not be perceived as a niche issue but rather as key to radically reimagining AI governance, power and politics in more participatory and inclusive ways.