In many machine learning projects, the absence of an effective monitoring system is a serious concern, leading to challenges and risks that compromise the quality, reliability, and sustainability of models deployed in production. As machine learning gains importance across fields, poorly implemented monitoring is a major obstacle to realizing its full potential. Monitoring metrics is essential for evaluating and validating a model's performance, not only during the development phase but also once it is deployed in production: it enables real-time data to be collected on various metrics so that potential issues can be identified and adjustments made accordingly, guaranteeing consistent model quality and reliability. This article provides a comprehensive guide that introduces and explains a wide range of metrics used for continuous monitoring of ML systems at the various stages of the MLOps lifecycle. It also presents a comparative analysis of available monitoring tools, enabling organizations to optimize performance and ensure the seamless deployment of their machine learning applications. In essence, it underscores the critical importance of continuous monitoring and tailored metrics for ensuring the success and reliability of machine learning systems.
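As a concrete illustration of the real-time metric collection described above, the following is a minimal sketch of a production monitor that tracks windowed accuracy and flags degradation. The `AccuracyMonitor` class, its window size, and its threshold are illustrative assumptions, not a tool or API from the article:

```python
# Minimal sketch (illustrative assumption, not from the article):
# a sliding-window accuracy monitor for a deployed model.
from collections import deque


class AccuracyMonitor:
    """Keeps a sliding window of prediction outcomes and flags
    degradation when windowed accuracy drops below a threshold."""

    def __init__(self, window=100, threshold=0.9):
        self.window = deque(maxlen=window)  # stores per-prediction correctness
        self.threshold = threshold

    def record(self, prediction, label):
        # Append True if the deployed model's prediction matched the
        # ground-truth label collected from production feedback.
        self.window.append(prediction == label)

    @property
    def accuracy(self):
        # Windowed accuracy; defaults to 1.0 before any data arrives.
        return sum(self.window) / len(self.window) if self.window else 1.0

    def degraded(self):
        # An alerting system would act on this signal.
        return self.accuracy < self.threshold


monitor = AccuracyMonitor(window=4, threshold=0.75)
for pred, label in [(1, 1), (0, 0), (1, 0), (1, 1)]:
    monitor.record(pred, label)
print(monitor.accuracy)    # 3 of 4 correct -> 0.75
print(monitor.degraded())  # 0.75 is not below the threshold -> False
```

In practice, the same pattern extends to latency, data-drift, and fairness metrics, with the monitor's signal feeding an alerting or retraining pipeline rather than a simple boolean check.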