Social media platforms such as Facebook, Instagram, and Twitter are powerful and essential spaces where people express and share their ideas, knowledge, talents, and abilities with others. However, users also share harmful content, such as posts targeting gender, religion, or race, as well as trolling. These posts may take the form of tweets, videos, images, and memes. A meme is a social media medium that combines an image with embedded text. Memes convey a range of messages, from humor to offensive content such as personal attacks, hate speech, or racial abuse, and such posts need to be filtered out of social media immediately. This paper presents a framework that detects offensive text in memes and prevents such content from being posted on social media, using the collected KAU-Memes dataset of 2,582 memes. The dataset combines the "2016 U.S. Election" dataset with memes newly generated from a collection of offensive and non-offensive tweet datasets. The KAU-Memes dataset, which contains symbolic images with corresponding text, is used to validate the proposed model. We train and compare three deep-learning algorithms for detecting offensive text in memes. To the best of the authors' knowledge and literature review, this is the first approach based on You Only Look Once (YOLO) for offensive text detection in memes. The framework evaluates YOLOv4, YOLOv5, and SSD MobileNetV2 on the newly labeled KAU-Memes dataset. The results show that SSD MobileNetV2 achieved an mAP of 81.74% and an F1-score of 84.1%, while YOLOv4 achieved an mAP of 85.20% and an F1-score of 84.0%. YOLOv5 performed best, achieving the highest mAP, F1-score, precision, and recall at 88.50%, 88.8%, 90.2%, and 87.5%, respectively.
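As a minimal sketch of how a detector trained in such a framework might be applied at posting time, the example below runs a custom YOLOv5 model on a single meme image via the Ultralytics PyTorch Hub interface. The weights file `kau_memes_best.pt` and the class name `offensive` are hypothetical placeholders for illustration, not artifacts released with the paper.

```python
# Illustrative sketch: screening one meme image with a custom-trained YOLOv5 model.
# Assumptions: "kau_memes_best.pt" and the class label "offensive" are hypothetical.
import torch

# Load custom weights through the Ultralytics YOLOv5 hub entry point.
model = torch.hub.load("ultralytics/yolov5", "custom", path="kau_memes_best.pt")

# Run inference; YOLOv5 handles image loading, resizing, and normalization.
results = model("meme.jpg")

# Detections as a DataFrame: bounding boxes, confidence scores, class names.
detections = results.pandas().xyxy[0]
for _, det in detections.iterrows():
    print(f"{det['name']}: confidence={det['confidence']:.2f}, "
          f"box=({det['xmin']:.0f}, {det['ymin']:.0f}, "
          f"{det['xmax']:.0f}, {det['ymax']:.0f})")

# A simple moderation rule could flag the post if any "offensive" region is detected.
flagged = (detections["name"] == "offensive").any()
print("Post blocked" if flagged else "Post allowed")
```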