We investigate a contractual mechanism to protect and amplify the well-being of Indigenous communities in the development of Artificial Intelligence (AI) that affects them. Our proposal explores the need for a legal mechanism that recognizes the importance of cultural knowledge and ways of being and doing, acknowledging that these can be in tension with the (potentially myopic) goals of AI development. We outline the preconditions for such a legal mechanism to be possible, including some of the core components that could give rise to a termination for cultural misalignment, as well as the supporting governance structures and operating principles such a legal mechanism may engender. We discuss how establishing such a mechanism in contracts compels procurers of AI development services, and therefore developers of AI systems themselves, to adopt and enact principles by which they will work to protect and enable community well-being, thereby instigating important behavior change. We also consider the types of knowledge, skills, and training that would be required to implement such a mechanism successfully. While this essay places particular emphasis on ensuring Indigenous community well-being in the development of AI, the mechanism also has applications for other communities.