Phishing emails are among the most common and most harmful cybersecurity attacks. With the emergence of generative AI, phishing attacks no longer need to rely on a single email template sent to a large number of recipients: generative AI can craft a different email for each potential victim, making it harder for cybersecurity systems to flag the scam before it reaches the recipient. Here, we describe a corpus of AI-generated phishing emails, which is made open to the public and can be used in subsequent studies. We also apply several machine learning tools to test whether automatic text analysis can identify AI-generated phishing emails. The results are encouraging, and show that machine learning tools can distinguish AI-generated phishing emails from regular emails and from human-generated scam emails with high accuracy. Descriptive analytics further profiles the specific differences between AI-generated emails and manually crafted scam emails, showing that AI-generated emails differ in style from human-generated phishing scams. Automatic identification tools can therefore serve as a warning for the user. While the ability of machine learning to detect AI-generated phishing emails is encouraging, such emails differ from regular phishing emails, and it is therefore important to also train machine learning systems on AI-generated emails in order to repel future phishing attacks powered by generative AI.
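
The kind of text-classification setup the abstract describes can be sketched as follows. This is a minimal illustration, not the study's actual pipeline: the abstract does not name the machine learning tools used, so the TF-IDF features, logistic regression classifier, and the tiny invented corpus below are all assumptions for demonstration purposes; the real experiments use the published corpus of AI-generated phishing emails.

```python
# Hypothetical sketch of a classifier that separates AI-generated phishing
# emails from other emails. The feature choice (character n-gram TF-IDF) and
# model (logistic regression) are illustrative assumptions, not the paper's
# method; the four training emails are invented toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: label 1 = AI-generated phishing, 0 = other email.
emails = [
    "Dear valued customer, we detected unusual activity on your account. "
    "Please verify your credentials promptly to avoid suspension.",
    "Hello, your package delivery requires confirmation. Kindly review the "
    "attached invoice and confirm your payment details at your convenience.",
    "hey are we still on for lunch tomorrow? let me know",
    "Attached are the Q3 figures we discussed in Monday's meeting.",
]
labels = [1, 1, 0, 0]

# Character n-grams capture stylistic cues (tone, phrasing regularity)
# rather than topic words alone, matching the abstract's observation that
# AI-generated emails differ in *style* from human-written scams.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
clf.fit(emails, labels)

pred = clf.predict(
    ["Please kindly verify your account details to avoid suspension."]
)[0]
```

In practice, such a classifier would be trained on the full corpus and evaluated with held-out data; the point of the sketch is only the shape of the pipeline, from raw email text to a phishing/non-phishing label.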