To date, the detection of hate speech is still carried out primarily by humans, yet there is great potential in combining human expertise with automated approaches. A known challenge, however, is the low level of agreement between humans and machines, caused by the algorithms' lack of knowledge of, e.g., cultural and social structures. In this work, a design science approach is used to derive design knowledge and develop an artifact that integrates humans into the process of detecting and evaluating hate speech. For this purpose, explainable artificial intelligence (XAI) is utilized: the artifact provides explanatory information on why the deep learning model predicted that a given text does or does not contain hate speech. Results show that the design knowledge, instantiated in the form of a dashboard, is perceived as valuable, and that XAI features increase the perceived usefulness, ease of use, and trustworthiness of the artifact, as well as the intention to use it.
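
The abstract does not name a specific XAI method, so the following is only a minimal sketch of the kind of token-level explanation such a dashboard could surface, using LIME. A TF-IDF plus logistic regression pipeline and the toy texts stand in for the paper's deep learning model and corpus, purely so the example runs self-contained.

```python
# Minimal sketch: token-level explanations for a text classifier with LIME.
# The artifact described in the paper uses a deep learning model; a simple
# TF-IDF + logistic regression pipeline stands in here as a placeholder.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy placeholder data (the real artifact would be trained on a
# labelled hate speech corpus).
texts = ["you are wonderful", "have a nice day",
         "i hate you", "you people are worthless"]
labels = [0, 0, 1, 1]  # 0 = no hate, 1 = hate

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["no hate", "hate"])
explanation = explainer.explain_instance(
    "you people are worthless",  # text whose prediction is explained
    model.predict_proba,         # probability function over raw texts
    num_features=5,              # top contributing tokens
)

# Each tuple is (token, signed weight); positive weights push the
# prediction towards "hate" - the kind of cue a reviewer could inspect.
print(explanation.as_list())
```

Explanations of this form could let a human reviewer see which tokens drove a prediction, which is the hedged intuition behind the dashboard's explanative information.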