Uses of pejorative expressions can be benign or actively empowering. When models for abuse detection misclassify these expressions as derogatory, they inadvertently censor productive conversations held by marginalized groups. One way to engage with nondominant perspectives is to add context around conversations. Previous research has leveraged user- and thread-level features, but it often neglects the spaces within which productive conversations take place. Our paper highlights how community context can improve classification outcomes in abusive language detection. We make two main contributions to this end. First, we demonstrate that online communities cluster by the nature of their support towards victims of abuse. Second, we establish how community context improves accuracy and reduces the false positive rates of state-of-the-art abusive language classifiers. These findings suggest a promising direction for context-aware models in abusive language research.

Productive conversations containing slurs are common, and they take many forms (Hom, 2008). Research inspired by the #MeToo movement has focused on the detection of sexual harassment disclosures by victims (Deal et al., 2020), but this research has not been sufficiently integrated into the literature on abusive language detection. The distinction between actual sexist messages and messages calling out sexism is rarely addressed in the field (Chiril et al., 2020). A similar trend is seen with sarcasm. Humor and self-irony can be employed as coping mechanisms by victims of abuse (Garrick, 2006), yet they constitute frequent sources of error for state-of-the-art classifiers (Vidgen et al., 2019). For example, the median toxicity score for language on transgendercirclejerk, a "parody [subreddit] for trans people", is as high as 90% (Kurrek