Detecting online hate is a complex task, and low-performing models have harmful consequences when used for sensitive applications such as content moderation. Emoji-based hate is an emerging challenge for automated detection. We present HATEMOJICHECK, a test suite of 3,930 short-form statements that allows us to evaluate performance on hateful language expressed with emoji. Using the test suite, we expose weaknesses in existing hate detection models. To address these weaknesses, we create the HATEMOJIBUILD dataset using a human-and-model-in-the-loop approach. Models built with these 5,912 adversarial examples perform substantially better at detecting emoji-based hate, while retaining strong performance on text-only hate. Both HATEMOJICHECK and HATEMOJIBUILD are made publicly available.

Content Warning: This article contains examples of hateful language from HATEMOJICHECK to illustrate its composition. Examples are quoted verbatim, except for slurs and profanity in text, for which the first vowel is replaced with an asterisk. The authors oppose the use of hateful language.