Background. Unintended biases introduced by optimization and machine learning (ML) models are of great interest to medical professionals. Bias in healthcare decisions can cause patients from vulnerable populations (e.g., racially minoritized or low-income patients) to have reduced access to resources, exacerbating societal unfairness. Purpose. This review aims to identify, describe, and categorize literature on bias types, fairness metrics, and bias mitigation methods in healthcare decision making. Data Sources. The Google Scholar database was searched to identify published studies. Study Selection. Eligible studies were required to present 1) types of bias, 2) fairness metrics, and 3) bias mitigation methods in healthcare decision making. Data Extraction. Studies were classified according to the three themes listed under Study Selection, and information was extracted on the definitions, examples, applications, and limitations of bias types, fairness metrics, and bias mitigation methods. Data Synthesis. The bias type section includes studies (n=15) describing different biases; the fairness metric section includes studies (n=6) on common fairness metrics; and the bias mitigation method section covers pre-processing methods (n=5), in-processing methods (n=16), and post-processing methods (n=4). Limitations. Most examples in this review come from the United States because the majority of included studies were conducted there. In addition, the search was limited to English, so relevant articles in other languages may have been missed. Conclusions. Several types of bias, fairness metrics, and bias mitigation methods (especially optimization- and machine learning-based methods) were identified in this review, with common themes organized by analytical approach. We also found that explainability, fairness metric selection, and the integration of prediction and optimization are promising directions for future research.