Human rights concerns about the impacts of artificial intelligence ('AI') have largely revolved around its effects on specific rights, such as the rights to privacy, non-discrimination and freedom of expression. This article argues, however, that the effects run deeper, potentially challenging the foundational assumptions of key concepts and normative justifications of the human rights framework. To unpack this, the article applies the lens of 'slow violence', a term borrowed from environmental justice literature, to frame the grinding, gradual, attritional harms that AI inflicts on the human rights framework.

The article examines the slow violence of AI towards human rights at three levels. First, the individual, as the subject of interest and protection within the human rights framework, is increasingly unable either to understand or to seek accountability for harms arising from the deployment of AI systems. This undermines a key premise of the framework, which was meant to empower the individual to confront large power disparities and to call for accountability for such abuses of power. Secondly, the slow violence of AI is visible in the unravelling of the normative justifications of discrete rights such as the right to privacy, freedom of expression and freedom of thought, upending the reasons and assumptions on which those rights were formulated and formalised in the first place. Finally, the article examines how even wide interpretations of the normative foundation of human rights, namely human dignity, are unable to address the putative new challenges AI poses to the concept. It then considers and outlines critical perspectives that can inform a new model of human rights accountability in the age of AI.