Decisions made by artificial intelligence (AI) systems increasingly influence our day-to-day lives. With the growing use of AI systems, it becomes crucial to ensure that they are fair, to identify the underlying biases in their decision-making, and to create a standardized framework for ascertaining their fairness. Biases in AI systems lead to unintended ethical, social and even legal issues. In this paper, we propose a novel and versatile Fairness Score and Bias Index for measuring the fairness of a supervised learning AI system. We also propose a standard operating procedure (SOP) for issuing Fairness Certification for such data-driven applications. Standardizing the Fairness Score and the audit process will ensure quality, reduce ambiguity, enable comparison and improve the trustworthiness of AI systems. It will also provide a framework to operationalize the concept of fairness and facilitate the commercial deployment of such systems. Furthermore, a Fairness Certificate issued by a designated third-party auditing agency following the standardized process would strengthen organizations' confidence in the AI systems they intend to deploy. The Bias Index proposed in this paper reveals comparative bias among the various protected attributes within the dataset, while the Fairness Score measures overall fairness, with an ideal value of 1. To substantiate the proposed framework, we iteratively train models on biased and unbiased data across multiple labelled datasets and verify that the Fairness Score, Bias Indexes and the proposed process correctly identify the biases and judge the fairness.
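To make the relationship between the two metrics concrete, the following is a minimal sketch of how a per-attribute bias index and an overall fairness score with an ideal value of 1 could be computed. The specific formulas here (bias index as the maximum deviation of any group's positive-prediction rate from the overall rate, and fairness score as 1 minus the mean bias index) are illustrative assumptions for exposition, not the paper's exact definitions; the function names are likewise hypothetical.

```python
import numpy as np

def bias_index(y_pred, groups):
    """Illustrative bias index for one protected attribute:
    the maximum deviation of any group's positive-prediction
    rate from the overall positive-prediction rate.
    (Assumed formula, not the paper's exact definition.)"""
    overall_rate = np.mean(y_pred)
    deviations = [abs(np.mean(y_pred[groups == g]) - overall_rate)
                  for g in np.unique(groups)]
    return max(deviations)

def fairness_score(y_pred, protected_attrs):
    """Illustrative overall score: 1 minus the mean bias index
    across all protected attributes, so 1.0 means no measured
    bias for any attribute. (Assumed aggregation rule.)"""
    indexes = {name: bias_index(y_pred, groups)
               for name, groups in protected_attrs.items()}
    return 1.0 - np.mean(list(indexes.values())), indexes

# Toy example: binary predictions with one protected attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
attrs = {"sex": np.array(["F", "F", "F", "F", "M", "M", "M", "M"])}
score, indexes = fairness_score(y_pred, attrs)
# Female positive rate 0.75, male 0.25, overall 0.50:
# bias index 0.25, fairness score 0.75.
```

Under this sketch, a perfectly fair predictor (equal positive-prediction rates across every group of every protected attribute) attains the ideal score of 1, matching the abstract's description.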