Background & Objective: This survey examines the application of Federated Learning (FL) and secure Multiparty Computation (MPC) to medical data privacy. It provides an overview of FL and MPC techniques, discusses their strengths and weaknesses, and also covers related techniques such as homomorphic encryption, data masking, and differential privacy, along with their efficiency and limitations.

Methods: Eligibility Criteria: The PRISMA[1] framework was employed, and popular electronic databases such as IEEE, Computers and Security, Bioinformatics, and Google Scholar were searched, along with government websites and research web pages. Papers published between 2018 and 2023 and written in English were considered for the survey. The survey also outlines directions for future research and potential challenges in deploying these techniques at scale.

Results: The search was restricted to the period between 2018 and 2023. Initially, ~100 papers were shortlisted; after a thorough review of each paper, ~35 were finally selected for this work. However, papers from earlier years are also included where they were found relevant to this study. Fig. 4 below describes the article selection process. Apart from research papers, government websites were consulted for information on the various laws, regulations, and compliance requirements governing the privacy of patient information. Acts such as HIPAA[7], GDPR[8], and PDP[9] were reviewed thoroughly to ensure all aspects are covered in this survey.

Conclusion: The knowledge produced by the data mining process is derived from data, and such data often include personal information about individuals. The main goal of privacy-preserving data mining is therefore the development of algorithms that conceal or protect sensitive information so that it cannot be disclosed to unintended users, hackers, or intruders.
Various techniques such as homomorphic encryption, differential privacy, data masking, data aggregation, and multiparty computation are studied; no single technique may be sufficient to provide an end-to-end solution. In future work, we intend to explore combining multiple techniques to build a secure ML model.
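As a minimal illustration of one of the surveyed techniques, the sketch below shows the Laplace mechanism for computing an ε-differentially private mean over a clipped numeric attribute. This is an illustrative example under stated assumptions, not an implementation from any surveyed system; the function names and the synthetic age data are hypothetical.

```python
import math
import random


def laplace_noise(scale: float) -> float:
    """Sample from a Laplace(0, scale) distribution via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)


def dp_mean(values, lower, upper, epsilon):
    """Return an epsilon-differentially private mean of `values`.

    Values are clipped to [lower, upper], so the sensitivity of the mean
    is (upper - lower) / n; adding Laplace noise with scale
    sensitivity / epsilon yields epsilon-differential privacy.
    """
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / n
    sensitivity = (upper - lower) / n
    return true_mean + laplace_noise(sensitivity / epsilon)


# Example: a private mean over synthetic (hypothetical) patient ages.
ages = [34, 45, 29, 61, 50, 38, 47, 55]
print(dp_mean(ages, lower=0, upper=100, epsilon=1.0))
```

Smaller ε values add more noise and give stronger privacy; in a federated setting, each site could release only such noised aggregates rather than raw records.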