In the setting of underdiagnosed and undertreated perinatal depression (PD), artificial intelligence (AI) solutions are poised to help predict and treat PD. In the near future, perinatal patients may interact with AI during clinical decision-making, in their patient portals, or through AI-powered chatbots delivering psychotherapy. The growing range of potential AI applications has prompted discussions of responsible AI (RAI) and explainable AI (XAI). Current discussions of RAI, however, give limited consideration to the patient as an active participant in interactions with AI. We therefore propose a patient-centered, rather than patient-adjacent, approach to RAI and XAI that identifies autonomy, beneficence, justice, trust, privacy, and transparency as core principles to uphold for both health professionals and patients. We present empirical evidence that patients strongly value these principles, suggest possible design solutions that uphold them, and acknowledge the pressing need for further research on their practical application.