Federated Learning (FL) is a concept that has been adopted in the medical field to analyze data residing on individual devices by aggregating locally trained machine learning models on a global server. It also preserves data privacy, since the participating devices do not share raw data among themselves. It therefore reduces computation costs and privacy risks to some extent compared with conventional centralized machine learning. However, federated learning faces different requirements in healthcare than in other sectors. Preserving patients’ sensitive information, such as electronic health records (EHRs), when sharing data among different medical practitioners is of greatest concern. The question, then, is how FL techniques should be structured in the current clinical environment, where heterogeneity is the order of the day. The EU’s General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act of 1996 (HIPAA) require health providers to obtain authorization from patients before sharing their private data for medical analysis. This creates bottlenecks in clinical analysis. Although attempts have been made to address some of these challenges, privacy, performance, implementation, computation, and adversarial attacks still pose threats. This paper provides a comprehensive review covering the literature, mathematical notation, architecture, process flow, challenges, and frameworks used to implement FL in healthcare. Possible solutions for addressing privacy challenges in accordance with HIPAA and the GDPR are discussed. Finally, the study gives future directions for FL in clinical health and a list of practical tools for conducting analysis on patients’ data.
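As a minimal illustration of the local-training and server-side aggregation idea described above, the sketch below implements a FedAvg-style weighted average of client model parameters. It is a toy simulation, not any specific FL framework: the linear model, the function names (local_update, fedavg), and the synthetic client data are all hypothetical and chosen only to show that raw data never leaves a client, while only model weights reach the server.

```python
# Minimal sketch of FedAvg-style aggregation (illustrative only; names are hypothetical).
# Each client trains on its own private data; only model parameters are sent to the server.
import numpy as np

def local_update(global_weights, client_data, lr=0.1, epochs=1):
    """One round of local training for a linear regression model on a single client's data."""
    X, y = client_data
    w = global_weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w, len(y)

def fedavg(client_updates):
    """Server-side aggregation: average client weights, weighted by local sample count."""
    weights, counts = zip(*client_updates)
    total = sum(counts)
    return sum(w * (n / total) for w, n in zip(weights, counts))

# Simulated round: three clients (e.g., hospitals) whose raw data stays local.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]
global_w = np.zeros(3)
for _ in range(10):
    updates = [local_update(global_w, data) for data in clients]
    global_w = fedavg(updates)
print("Global model after 10 rounds:", global_w)
```

The weighting by local sample count mirrors the standard FedAvg formulation; in a real clinical deployment the local update would be replaced by training the actual model on each institution's EHR data, with secure channels and the privacy safeguards discussed later in this review.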