Microaggressions are brief, everyday verbal, behavioral, or environmental actions that convey negative, demeaning, or hostile racial undertones. They can be unintentional and often go unnoticed by the offender, yet they can significantly affect victims' mental health, leading to stress, low self-esteem, and feelings of invalidation. This research aims to detect microaggressions in written communication using machine learning. The study addresses the scarcity of annotated microaggression data by collecting text from microaggressions.com, ChatGPT, Reddit, and office workplaces, and annotating it with the GPT-3.5 language model. Multiple machine learning algorithms were trained to detect microaggressive language in text and evaluated with appropriate metrics. A Long Short-Term Memory (LSTM) model with BERT embeddings proved the most stable at detecting microaggressions. This work advances the field of microaggression detection by leveraging deep learning techniques that could potentially be extended to help eliminate microaggressions in text.
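To illustrate the kind of architecture the abstract refers to, the following is a minimal sketch, not the authors' implementation, of an LSTM classifier operating on BERT embeddings. The model name (bert-base-uncased), hidden size, frozen-encoder setup, and the example sentence are all assumptions chosen for brevity.

```python
# Hypothetical sketch: frozen BERT token embeddings fed into a bidirectional LSTM
# with a binary classification head (microaggressive vs. not).
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel


class BertLstmClassifier(nn.Module):
    def __init__(self, bert_name: str = "bert-base-uncased", hidden_size: int = 128):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        # Freeze BERT so it acts purely as a contextual embedding layer (assumption).
        for p in self.bert.parameters():
            p.requires_grad = False
        self.lstm = nn.LSTM(
            input_size=self.bert.config.hidden_size,
            hidden_size=hidden_size,
            batch_first=True,
            bidirectional=True,
        )
        self.classifier = nn.Linear(2 * hidden_size, 1)  # single logit for binary detection

    def forward(self, input_ids, attention_mask):
        with torch.no_grad():
            embeddings = self.bert(
                input_ids=input_ids, attention_mask=attention_mask
            ).last_hidden_state
        lstm_out, _ = self.lstm(embeddings)
        # Use the last time step of the LSTM output as the sequence representation.
        return self.classifier(lstm_out[:, -1, :]).squeeze(-1)


tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertLstmClassifier()
batch = tokenizer(
    ["You speak English so well for someone like you."],  # illustrative input
    return_tensors="pt", padding=True, truncation=True,
)
logit = model(batch["input_ids"], batch["attention_mask"])
prob = torch.sigmoid(logit)  # estimated probability that the text is microaggressive
```

In practice, such a model would be trained on the annotated dataset described above and compared against the other algorithms using the chosen evaluation metrics.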