“…This research illuminates the consequences of algorithmic invisibility—how algorithmic systems, in this case social media content moderation systems, deny users with marginalized identities access to visibility and engagement, and how those impacted theorize the mechanisms and motivations behind this invisibility (Bucher, 2012; Cotter, 2019). It is important to note, however, that visibility can also expose users to harm, leaving vulnerable groups facing harm from both invisibility and hypervisibility (Dinar, 2021; Díaz & Hecht-Felella, 2021; Marshall, 2021; Siapera, 2022). Counterbalancing the empirically documented algorithmic oppression of historically marginalized identities on digital platforms (Noble, 2018), “algorithmic privilege” has emerged as a framework for understanding users who are “positioned to benefit from how an algorithm operates on the basis of identity” (Karizat et al., 2021, p. 3).…”