This research addresses escalating privacy concerns around safeguarding sensitive medical data in an increasingly demanding healthcare landscape. We conduct an experimental study of differentially private federated learning for medical image classification on three benchmark datasets: PathMNIST, BloodMNIST, and OrganAMNIST. This study pioneers the application of federated learning with differential privacy in healthcare, simulating a realistic data distribution across twelve hospitals, and additionally introduces a novel deep-learning architecture tailored to differentially private training. Our findings show that the federated models outperform traditional approaches, with accuracy approaching that of non-private settings. By leveraging resilient deep-learning models, we aim to improve the privacy, efficiency, and effectiveness of healthcare solutions, benefiting patients, healthcare practitioners, and the healthcare system as a whole through privacy-protected healthcare.
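The training scheme described above, federated learning across simulated hospitals with a differential-privacy mechanism applied to aggregated updates, can be sketched minimally as follows. This is an illustrative NumPy-only simulation, not the paper's implementation: the function name, hyperparameters (clip norm, noise multiplier, learning rate), and the toy 8-dimensional model are all assumptions made for clarity.

```python
import numpy as np

def dp_fedavg_round(global_w, client_updates, clip_norm=1.0,
                    noise_mult=1.1, lr=0.1, rng=None):
    """One federated round: clip each client update to bound its L2
    sensitivity, average, add calibrated Gaussian noise, then apply."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in client_updates:
        norm = np.linalg.norm(g)
        # Scale down any update whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Gaussian noise scaled to the per-client sensitivity clip_norm / n.
    noise = rng.normal(0.0, noise_mult * clip_norm / len(client_updates),
                       size=avg.shape)
    return global_w - lr * (avg + noise)

# Simulate twelve "hospitals", each contributing a local update per round.
rng = np.random.default_rng(42)
w = np.zeros(8)  # toy global model parameters
for _ in range(5):
    updates = [rng.normal(size=8) for _ in range(12)]
    w = dp_fedavg_round(w, updates, rng=rng)
print(w.shape)  # (8,)
```

In a real system each client update would come from local training on that hospital's MedMNIST shard, and the privacy loss would be tracked across rounds with a moments accountant rather than fixed once.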