As the number of social media users grows each year, the volume of posts grows with it. This is especially relevant for posts with harmful content, including hate speech, misinformation, explicit material, and cyberbullying, which severely degrade the user experience. This paper focuses on content moderation with LLMs and the issues of bias, transparency, free speech, and accountability that it raises. Several experiments were conducted with pre-trained models to assess their efficiency and the ethical concerns that arise when moderating posted content. Our findings reveal that LLMs exhibit bias when moderating content from different demographic groups and minority communities. One of the most significant challenges identified was the lack of transparency in the LLMs' decision-making process. Despite these ethical concerns, the LLMs processed large volumes of content efficiently, significantly reducing the time required to flag potentially harmful posts. This research highlights the need for a balanced approach that protects freedom of speech while ensuring the ethical and responsible use of NLP on online platforms.