Artificial intelligence systems, particularly those involving sophisticated neural network architectures like ChatGPT, have demonstrated remarkable capabilities in generating human-like text. However, the susceptibility of these systems to malicious prompt injections poses significant risks, necessitating comprehensive evaluations of their safety and robustness. The study presents a novel automated framework for systematically injecting and analyzing malicious prompts to assess the vulnerabilities of ChatGPT. Results indicate substantial rates of harmful responses across various scenarios, highlighting critical areas for improvement in model defenses. The findings underscore the importance of advanced adversarial training, real-time monitoring, and interdisciplinary collaboration to enhance the ethical deployment of AI systems. Recommendations for future research emphasize the need for robust safety mechanisms and transparent model operations to mitigate the risks associated with adversarial inputs.