Artificial intelligence (AI) large language models (LLMs) hold great potential to transform psychiatry and mental health care by delivering relevant, tailored mental health information. This study aimed to evaluate the quality of mental health information generated by LLMs by determining its accessibility and reliability and by identifying any bias present. Generative Pre-trained Transformer 4 (GPT-4) (San Francisco, California: OpenAI), Gemini 1.5 Flash (Mountain View, California: Google LLC), and Large Language Model Meta AI 3.2 (Llama 3.2) (Menlo Park, California: Meta Platforms, Inc.) were each prompted with 20 questions commonly asked about anxiety and depression. The responses were evaluated using the Flesch-Kincaid readability tests to quantify their ease of understanding through grade-level and reading-ease measures. The text was subsequently analyzed using a modified DISCERN score to assess the reliability of the health-related information presented. Finally, the LLM responses were screened for stigmatizing language against communication and language guidelines for mental health. Significant differences in grade levels and reading ease scores were observed between GPT-4 and Llama (p < 0.01) and between Gemini and Llama (p < 0.01), with both GPT-4 and Gemini achieving higher readability scores. No significant difference in grade level or reading ease scores was observed between GPT-4 and Gemini (p > 0.05). All three models achieved moderate reliability scores, with no significant differences among them (p > 0.05). GPT-4, Gemini, and Llama included stigmatizing phrases in 10%, 15%, and 20% of their responses, respectively. These phrases appeared in descriptions of mental health conditions and substance use; however, the proportions of stigmatizing phrases did not differ significantly across the three models (p > 0.05). Furthermore, comparisons between anxiety and depression responses within each model revealed no significant differences in readability, reliability, or bias. All models demonstrated the ability to generate mental health information that generally satisfied criteria for accessibility, reliability, and minimal bias, with GPT-4 and Gemini reporting higher readability than Llama. However, the lack of additional resources cited in responses and the occasional presence of stigmatizing phrases indicate that additional model training and fine-tuning, specifically for mental health applications, may be necessary before these tools can be deployed on a large scale for mental health care.
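For reference, the Flesch-Kincaid measures referred to above are computed from word, sentence, and syllable counts using the standard published formulas (restated here for clarity; they are not part of the original abstract):

\[
\text{Flesch Reading Ease} = 206.835 - 1.015\left(\frac{\text{total words}}{\text{total sentences}}\right) - 84.6\left(\frac{\text{total syllables}}{\text{total words}}\right)
\]

\[
\text{Flesch-Kincaid Grade Level} = 0.39\left(\frac{\text{total words}}{\text{total sentences}}\right) + 11.8\left(\frac{\text{total syllables}}{\text{total words}}\right) - 15.59
\]

Higher reading ease scores indicate text that is easier to read, while higher grade levels correspond to text requiring more years of education to understand.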