The proliferation of images generated by artificial intelligence (AI) has significantly altered the digital landscape, particularly on social media platforms, where the distinction between authentic and synthetic content is increasingly blurred. This study presents a comparative review of the strategies used by major social media platforms (Facebook/Instagram, Twitter, TikTok, and YouTube) to detect AI-generated images. Drawing on a systematic review of academic literature, an analysis of platform policies, and expert interviews, it assesses the effectiveness of detection methods ranging from sophisticated AI tools to user reporting mechanisms. The findings reveal diverse approaches: Facebook and Instagram combine AI detection with human moderation; Twitter integrates machine learning algorithms with user reports; TikTok embeds AI tools in its moderation workflows and supplements them with educational initiatives; and YouTube relies on its Content ID system alongside AI analysis. The study highlights the critical role of effective detection systems in maintaining content authenticity and user trust, underscoring the importance of balancing automated detection with human oversight. Continued development and refinement of these technologies, together with cross-platform collaboration and evolving regulatory frameworks, are identified as essential to a trustworthy digital environment. This research contributes to the discourse on digital integrity, offering insight into the complexities of safeguarding social media ecosystems against the challenges posed by AI-generated content.