This study presents a comprehensive analysis of the application of Large Language Models (LLMs), specifically ChatGPT and Claude, in the context of ransomware negotiation. Ransomware, an increasingly prevalent and sophisticated cyber threat, necessitates innovative response strategies. This study examines the capabilities of these LLMs in simulating human-like negotiation tactics against ransomware attacks, focusing on two main types: cryptographic and data exfiltration ransomware. Through a series of controlled simulations, the efficacy of ChatGPT and Claude in understanding complex language constructs, formulating negotiation strategies, and adapting to varying ransomware scenarios is evaluated. The research highlights the strengths of these models in response accuracy, adaptability, and resistance to psychological manipulation. However, it also reveals their susceptibility to hallucinations, that is, unrealistic or inaccurate responses. The study contributes to the understanding of AI's potential in cybersecurity, emphasizing the need for improvements in AI reliability, ethical safeguards, and the integration of human oversight. The findings suggest that while LLMs hold promise for enhancing cyber defense mechanisms, their deployment in high-stakes scenarios such as ransomware negotiations must be approached with caution and continuous human supervision.