The increasing interest in leveraging generative AI models in healthcare necessitates secure infrastructure at academic medical centers. Without a centrally managed, secure system, researchers may build their own ad hoc, insecure workflows, risking the exposure of protected health information (PHI) to the public internet or its inadvertent incorporation into AI model training. To address these challenges, our institution implemented a secure pathway to the Azure OpenAI Service through a private, institutionally controlled instance that supports high-throughput, secure large language model (LLM) queries. This pathway ensures data privacy while allowing researchers to harness the capabilities of LLMs for diverse healthcare applications. Our approach supports compliant, efficient, and innovative AI research in healthcare. This paper discusses the implementation, advantages, and use cases of this secure infrastructure, underscoring the critical need for centralized, secure AI solutions in academic medical environments.
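
To make the query pathway concrete, the sketch below shows how a researcher might submit a chat completion request to a private Azure OpenAI deployment using the openai Python SDK. The endpoint, environment-variable names, deployment name, and API version are illustrative assumptions for this sketch, not the institution's actual configuration.

```python
# Minimal sketch (Python, openai>=1.x): querying a private Azure OpenAI deployment.
# All endpoint, credential, and deployment names below are hypothetical placeholders;
# an institutional setup would typically route this traffic over a private network
# rather than the public internet.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    # Institution-controlled endpoint (placeholder); in a private instance this
    # resolves inside the institutional network boundary.
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],  # credential issued by the institution
    api_version="2024-02-01",  # example API version
)

response = client.chat.completions.create(
    model="gpt-4o-deployment",  # institution's model deployment name (placeholder)
    messages=[
        {"role": "system", "content": "You are a clinical research assistant."},
        {"role": "user", "content": "Summarize this de-identified note: ..."},
    ],
    temperature=0.0,
)

print(response.choices[0].message.content)
```

In this arrangement, researchers authenticate against the institution's own deployment rather than a public API, so queries and responses stay within infrastructure the institution controls.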