Up to 6% of the global population is estimated to be affected by one of approximately 10,000 distinct rare diseases (RDs). Many RDs remain poorly understood to this day, and patients are consequently heavily underserved. Most RD studies are chronically underfunded, and research faces inherent difficulties in analyzing scarce data. Furthermore, the creation and analysis of representative datasets are often constrained by stringent data protection regulations, such as the EU General Data Protection Regulation. This review examines the potential of federated learning (FL) as a privacy-by-design approach to training machine learning models on distributed datasets: patient data remain at the local institutions, and only model parameters are shared, which is particularly beneficial for sensitive data that cannot be collected centrally. FL can improve model accuracy by leveraging diverse datasets without compromising data privacy, which is especially relevant for rare diseases, where heterogeneity and small sample sizes impede the development of robust models. FL further has the potential to enable the discovery of novel biomarkers, enhance patient stratification, and facilitate the development of personalized treatment plans. This review illustrates how FL can facilitate large-scale, cross-institutional collaboration, thereby enabling the development of more accurate and generalizable models for improved diagnosis and treatment of rare diseases. However, challenges such as non-independent and identically distributed (non-IID) data and substantial computational and bandwidth requirements still need to be addressed. Future research must focus on applying FL to rare disease datasets and on exploring standardized protocols for cross-border collaborations, ultimately paving the way for a new era of privacy-preserving, distributed, data-driven rare disease research.
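
To make the parameter-sharing mechanism described above concrete, the sketch below simulates one possible FL setup using federated averaging (FedAvg) across three simulated institutions. The toy linear model, the `local_update` and `federated_average` helpers, and the dataset sizes are illustrative assumptions chosen for this sketch and are not taken from the review or from any specific FL framework.

```python
# Minimal FedAvg sketch: each client trains on its own local data and
# shares only model parameters; raw patient records never leave the
# institution. Model, names, and sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)


def local_update(weights, X, y, lr=0.1, epochs=5):
    """Run a few epochs of gradient descent on one institution's local data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient for a linear model
        w -= lr * grad
    return w


def federated_average(client_weights, client_sizes):
    """Server step: average client parameters weighted by local sample count."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))


# Simulated local datasets at three institutions (never pooled centrally).
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for n_samples in (40, 25, 60):
    X = rng.normal(size=(n_samples, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
    clients.append((X, y))

global_w = np.zeros(3)
for round_idx in range(10):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])

print("global model after FedAvg:", global_w)
```

In this sketch, only the parameter vectors returned by `local_update` cross institutional boundaries; the feature matrices and labels stay local, which is the property referred to above as privacy by design.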