Social media companies have learned the hard way that poor moderation of content in languages other than English can have grave consequences. Leaving harmful content up, particularly in regions where social media platforms are primary news and communication channels, has fueled conspiracy theories, violence, and even genocide around the world (Fink 2018; Iyengar 2018). American social media companies have long faced criticism for underinvesting in regions outside the US and Europe. In response, Meta, Google, and others have begun to deploy new multilingual language models that they claim can effectively detect and take action on harmful content in dozens, if not hundreds, of languages (Jigsaw 2021; Meta AI 2021b).

In previous work, we have argued that these models have critical shortcomings that limit their ability to perform highly language- and context-specific tasks, such as content moderation (Nicholas and Bhatia 2023b). One shortcoming is that these multilingual language models are trained predominantly on English-language text, which leads them to apply an Anglocentric lens to their analysis of texts from non-English linguistic and cultural contexts. This is due in large part to what natural language processing (NLP) researchers call the "resourcedness gap": the gap between the quantity, quality, and diversity of training data available in English and in every other language (Joshi et al. 2020). In low-resource languages, there are few, if any, high-quality examples of digitized text, which prevents developers from training and evaluating models on high-quality examples of speech in those languages.

The resourcedness gap exists for many reasons, including British colonialism, which drove the mass production of English-language text, and the hegemony of American technology companies, which has further entrenched English as the language of the internet and digital exchange. This gap may widen if funders and those with a mandate for the public interest do not intervene (Nicholas and Bhatia 2023a). However, Western technology companies are not financially incentivized to close this gap, and global academic institutions also tend to prioritize and privilege research and development of technologies in English at the expense of other, lower-resource languages (Bender 2019). Without key investments, the current incentive structures will continue to perpetuate the preferential treatment of English in computer science and, in turn, in automated trust and safety systems.

This commentary argues that what we now face is a free-rider problem: technology companies, academic institutions, and the public at large would all benefit greatly from increased investment in low-resource language development, but no single actor is currently incentivized to make it. With a few modest investments, however, governments, grantmaking organizations, and social media companies can begin to close this gap.