“…Such systems can augment the abilities of content moderators on online platforms so that they can intervene and mitigate behaviors that may be deemed inappropriate per the norms of a community (Van Cleemput, Vandebosch, and Pabian 2014). However, recent work (Ziems, Vigfusson, and Morstatter 2020) points out key limitations, such as the lack of publicly available training data and of a robust standard for determining ground truth, that have made existing cyberbullying detection algorithms unfit for real-world use. Notably, to date, most research on automated detection of cyberbullying has leveraged third-party annotators or “outsiders” (rather than victims or “insiders”) to label training datasets for cyberbullying ground truth (e.g., Singh, Ghosh, and Jose 2017; Kwak, Blackburn, and Han 2015), which may not be sensitive to the victims’ narratives regarding their own experiences.…”