Explaining the decisions of complex machine learning models is becoming a necessity in many areas where trust in the models' decisions is essential to their accreditation and adoption by domain experts. The ability to explain a model's decisions also provides added value, since it yields a diagnosis alongside the decision itself; this is particularly valuable in scenarios such as fault or abnormality detection. Unfortunately, high-performance models do not exhibit the transparency required to make their decisions fully understandable, and the black-box approaches used to explain them lack the accuracy needed to trace back the exact cause of a decision for a given input. Indeed, they cannot explicitly describe the decision regions of the model around that input, which would be necessary to state exactly what pushes the model towards one decision or the other. We therefore asked the following question: is there a category of commonly used high-performance models whose decision regions in the input feature space can be characterised explicitly and exactly in geometrical terms? Surprisingly, the answer is positive for any tree ensemble model, a category that encompasses a wide range of models dedicated to massive heterogeneous industrial data processing, such as XGBoost, CatBoost, LightGBM and random forests. For these models, we derive an exact geometrical characterisation of the decision regions in the form of a collection of multidimensional intervals. This characterisation makes it straightforward to compute the optimal counterfactual (CF) example associated with a query point, as well as the geometrical characterisation of the entire decision region containing that optimal CF example. We also demonstrate further possibilities of the approach, such as computing the CF example using only a subset of features, and fixing the values of variables over which the user has no control; this generally yields more plausible explanations by integrating prior knowledge about the problem. A straightforward adaptation of the method to counterfactual reasoning on regression problems is also envisaged.
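To make the geometrical idea concrete, the following is a minimal sketch, not the authors' implementation, assuming a single scikit-learn decision tree and hypothetical helpers leaf_boxes and nearest_counterfactual: each leaf of a tree corresponds to a multidimensional interval (an axis-aligned box) in feature space, and projecting a query point onto the nearest box of a different predicted class gives a counterfactual example. For a full ensemble, the decision regions would be intersections of such boxes across trees, which this single-tree illustration omits.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

def leaf_boxes(tree, n_features):
    """Enumerate (lower, upper, predicted_class) for every leaf of a fitted tree."""
    t = tree.tree_
    boxes = []

    def recurse(node, lower, upper):
        if t.children_left[node] == -1:  # leaf node
            cls = int(np.argmax(t.value[node]))
            boxes.append((lower.copy(), upper.copy(), cls))
            return
        f, thr = t.feature[node], t.threshold[node]
        # left child: samples with feature f <= thr
        u = upper.copy(); u[f] = min(u[f], thr)
        recurse(t.children_left[node], lower, u)
        # right child: samples with feature f > thr
        l = lower.copy(); l[f] = max(l[f], thr)
        recurse(t.children_right[node], l, upper)

    recurse(0, np.full(n_features, -np.inf), np.full(n_features, np.inf))
    return boxes

def nearest_counterfactual(x, boxes, target_class, eps=1e-6):
    """Project x onto the closest leaf box predicted as target_class (L2 distance)."""
    best, best_d = None, np.inf
    for lower, upper, cls in boxes:
        if cls != target_class:
            continue
        cf = np.clip(x, lower + eps, upper - eps)  # projection onto the box interior
        d = np.linalg.norm(cf - x)
        if d < best_d:
            best, best_d = cf, d
    return best, best_d

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
boxes = leaf_boxes(clf, X.shape[1])
x = X[0]  # a class-0 query point
cf, dist = nearest_counterfactual(x, boxes, target_class=1)
print("query prediction:", clf.predict([x])[0])
print("counterfactual:", cf, "prediction:", clf.predict([cf])[0], "distance:", round(dist, 3))

Restricting the CF search to a subset of features, or fixing uncontrollable variables, would amount in this sketch to clipping only the allowed coordinates and leaving the others at their query values.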