Responding to the rapid roll-out and large-scale commercialization of foundation models, large language models, and generative AI, an emerging body of work is shedding light on the myriad impacts these technologies are having across society. Such research is expansive, ranging from the production of discriminatory, fake, and toxic outputs and violations of privacy and copyright to the unjust extraction of labor and natural resources. The same cannot be said of some of the most prominent AI governance initiatives in the Global North, such as the UK's AI Safety Summit and the G7's Hiroshima Process, which have shaped much of the international dialogue around AI governance. Despite the wealth of cautionary tales and evidence of algorithmic harm, the AI governance discourse continues to over-emphasize technical matters of safety and global catastrophic or existential risks. This narrow focus has tended to draw attention away from the pressing social and ethical challenges posed by the current brute-force industrialization of AI applications. To address this visibility gap between real-world consequences and speculative risks, this paper offers a critical framework to account for the social, political, and environmental dimensions of foundation models and generative AI. The framework draws on a review of the literature on the harms and risks of these technologies.