This paper investigates the performance of massively multilingual neural machine translation (NMT) systems in translating Yorùbá greetings (ẹ kú)¹, which are a big part of Yorùbá language and culture, into English. To evaluate these models, we present IkiniYorùbá, a Yorùbá-English translation dataset containing Yorùbá greetings and sample use cases. We analysed the performance of different multilingual NMT systems, including Google Translate and NLLB, and show that these models struggle to accurately translate Yorùbá greetings into English. In addition, we trained a Yorùbá-English model by finetuning an existing NMT model on the training split of IkiniYorùbá, and it achieved better performance than the pre-trained multilingual NMT models, even though the latter were trained on much larger volumes of data.

* Equal contribution.
¹ For simplicity of notation in the title, we make use of ε, the Beninese Yorùbá letter representation of Ẹ (which is used in Nigeria).

Example (the phrase following ẹ kú provides the context of greeting):
Source: Ẹ kú ojúmọ́, ẹ sì kú déédé àsìkò yìí.
Target: Good morning and compliment for this period.
NLLB: You have died, and you have died to this hour.
Google Translate: Die every day, and die at this time.
Our Model: Good morning and compliment for this time.
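The gap between the literal, mistranslated outputs and the reference can be made concrete with a toy automatic evaluation. The sketch below is not the paper's actual evaluation pipeline: it is a simplified sentence-level BLEU with add-one smoothing, written from scratch, and the helper names `ngrams` and `bleu` are ours, applied to the sample translations above.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(hypothesis, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of add-one smoothed
    n-gram precisions (n = 1..max_n) times the brevity penalty.
    A simplified sketch, not the paper's evaluation setup."""
    hyp = hypothesis.lower().split()
    ref = reference.lower().split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hyp, n))
        ref_counts = Counter(ngrams(ref, n))
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(len(hyp) - n + 1, 0)
        log_prec += math.log((overlap + 1) / (total + 1)) / max_n
    brevity = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return brevity * math.exp(log_prec)

# The sample translations from the running example above.
ref = "Good morning and compliment for this period."
systems = {
    "NLLB": "You have died, and you have died to this hour.",
    "Google Translate": "Die every day, and die at this time.",
    "Our Model": "Good morning and compliment for this time.",
}
for name, hyp in systems.items():
    print(f"{name}: {bleu(hyp, ref):.3f}")
```

On this single sentence, the finetuned model's output scores far higher than the two literal mistranslations, mirroring the corpus-level finding reported above.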