stakeholders from across industry, academia, government, and civil society, and from around the globe, had made concerted efforts to develop standards, policies, and governance mechanisms to ensure the ethical, responsible, and equitable production and use of AI systems.

However, as we then show, despite these ostensibly supportive activities and background conditions, several primary drivers of future shock converged to produce an international AI policy and governance crisis in the wake of the dawning of the GenAI era. This crisis, we argue, was marked by the disconnect between mounting public concern about the hazards posed by the hasty industrial scaling of GenAI and the absence of effective regulatory mechanisms and needed policy interventions to address those hazards. In painting a broad-stroked picture of this crisis, we underscore two sets of contributing factors.

First, there have been factors demonstrating the absence of various vital aspects of AI policy and governance capability and execution, and thus the absence of key preconditions for readiness and resilience in managing technological transformation. These include prevalent enforcement gaps in existing digital- and data-related laws (e.g., intellectual property and data protection statutes), a lack of regulatory AI capacity, democratic deficits in the production of standards for trustworthy AI, and widespread evasive tactics of ethics washing and state-enabled deregulation.

Second, there have been factors that have significantly contributed to the presence of a new scale and order of systemic-, societal-, and biospheric-level risks and harms. Chief among these were the closely connected dynamics of unprecedented scaling and centralization that emerged as both drivers and by-products of the GenAI revolution. We focus, in particular, on model scaling and industrial scaling.
Whereas the scaling of data, model size, and compute was linked to the emergence of serious model-intrinsic risks deriving from the unfathomability of training data, model opacity and complexity, emergent model capabilities, and exponentially expanding compute costs, the rapid industrialization of FMs and GenAI systems meant the onset of a new scale of systemic risks spanning the social, political, economic, cultural, and natural ecosystems in which these systems were embedded. The brute-force commercialization of GenAI ushered in a new age of widespread exposure in which ever more impacted people and communities at large were made susceptible to the risks and harms issuing from model scaling and to new possibilities for misuse, abuse, and cascading system-level effects.

Alongside these dynamics of model scaling and industrial scaling, patterns of economic and geopolitical centralization only further intensified conditions of future shock. The steering and momentum of these scaling dynamics lay largely in the hands of a few large tech corporations, which essentially controlled the data, compute, and skills and knowledge infrastructures r...