Artificial intelligence (AI) assurance is an umbrella term for the many approaches, such as impact assessments, audits, and certification procedures, used to provide evidence that an AI system is legal, ethical, and technically robust. AI assurance approaches largely focus on two overlapping categories of harms: deployment harms, which emerge at or after the point of use, and individual harms, which directly affect a person as an individual. Current approaches generally overlook the upstream collective and societal harms associated with the development of systems, such as resource extraction and processing, exploitative labour practices, and energy-intensive model training. The scope of current AI assurance practice is therefore insufficient for ensuring that AI is ethical in a holistic sense, i.e., in ways that are legally permissible, socially acceptable, economically viable, and environmentally sustainable. This article addresses this shortcoming by arguing for a broader approach to AI assurance that is sensitive to the full scope of AI development and deployment harms. To do so, the article maps harms related to AI and highlights three examples of harmful practices that occur upstream in the AI supply chain, relating to the environment, labour, and data exploitation. It then reviews assurance mechanisms used in adjacent industries to mitigate similar harms, evaluating their strengths and weaknesses and the extent to which they have been applied to AI. Finally, it offers recommendations on how a broader approach to AI assurance can be implemented to mitigate harms more effectively across the whole AI supply chain.