Content moderation comes with trade‐offs and moral dilemmas, particularly for transnational platforms governing borderline content where the boundaries of acceptability are subject to debate. While extensive research has explored the legality and legitimacy of platformised speech governance in established democracies, few studies address the complexities of less‐than‐democratic developing nations. Through sociolegal analysis and controversy mapping of TikTok's localised moderation in South and Southeast Asia, this study examines how major actors negotiate the shifting boundaries of online speech. The analysis reveals that neither the platform nor regional states are well positioned to achieve sound governance of borderline content. TikTok localises its moderation primarily out of pragmatic necessity rather than moral obligation, deliberately sidestepping contentious political controversies. Governments, meanwhile, make determined efforts to control online discourse, leveraging legal uncertainty to further political agendas. As a result, local content governance often rests on vague rationales of securitisation and morality. The contradictory motives of (de)politicising borderline moderation seemingly counterbalance each other, yet in practice they produce an accountability vacuum in which legitimate interests are sidelined. Given the lack of normative common ground, ensuring procedural justice and encouraging civic participation are essential to counteract rhetoric that rationalises the imposition of particular speech norms on the basis of imbalanced political power.