Explainability has become crucial in Artificial Intelligence studies, and as the complexity of a model increases, so does the complexity of its explanation. However, the more complex the problem, the more information it may provide, and this information can be exploited to generate a more precise explanation of how the model works. One of the most valuable ways to recover this input-output relation is to extract counterfactual explanations, which identify the minimal changes needed to move an observation to another one belonging to a different class. In this article, we propose MUCH (MUlti Counterfactual via Halton sampling), a novel methodology to extract multiple counterfactual explanations from an original Multi-Class Support Vector Data Description algorithm. To evaluate the performance of the proposed method, we extracted a set of counterfactual explanations from three state-of-the-art datasets, achieving satisfactory results that pave the way to a range of real-world applications.

Impact Statement: When a system is analyzed by Artificial Intelligence, the underlying models are presented to domain experts, who are then delegated any further action. Counterfactual explanations, on the other hand, directly suggest how to act on the system. Counterfactual control remains under the experts' supervision, but the system gains a higher level of autonomy. The long-term goal is to make the AI model aware of how to affect its environment properly, both in terms of performance and safety. Examples include the maneuvering of autonomous cars, clinical diagnosis, and finance. The proposed approach generalizes counterfactual intelligibility and control to the multi-class case. The validation on practical scenarios (e.g., the FIFA dataset) corroborates both the control precision and the quality of the counterfactual explanations, thus increasing the readiness level of the approach.
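To give a concrete, simplified picture of the idea summarized above, the sketch below generates candidate points with a Halton sequence and keeps the closest one whose predicted class differs from that of the original observation. It is only an illustration of the general counterfactual-search principle, not the paper's MUCH procedure: a standard scikit-learn SVC on the Iris data stands in for the Multi-Class Support Vector Data Description model, and the helper name `halton_counterfactual` and the sampling box are our own illustrative choices.

```python
# Illustrative sketch: Halton-sampled candidates, closest class-flipping one kept.
# The SVC classifier is a stand-in, not the paper's Multi-Class SVDD model.
import numpy as np
from scipy.stats import qmc
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
clf = SVC().fit(X, y)


def halton_counterfactual(x, clf, lower, upper, n_samples=2048, seed=0):
    """Return the Halton-sampled point closest to x with a different predicted class."""
    sampler = qmc.Halton(d=x.shape[0], scramble=True, seed=seed)
    # Low-discrepancy candidates in [0, 1)^d, rescaled to the feature box.
    candidates = qmc.scale(sampler.random(n_samples), lower, upper)
    original_label = clf.predict(x.reshape(1, -1))[0]
    flipped = candidates[clf.predict(candidates) != original_label]
    if flipped.size == 0:
        return None  # no counterfactual found within the sampled box
    # Minimal change: smallest Euclidean distance to the original observation.
    return flipped[np.argmin(np.linalg.norm(flipped - x, axis=1))]


x0 = X[0]
cf = halton_counterfactual(x0, clf, X.min(axis=0), X.max(axis=0))
print("original class:      ", clf.predict(x0.reshape(1, -1))[0])
print("counterfactual class:", clf.predict(cf.reshape(1, -1))[0])
print("feature changes:     ", cf - x0)
```

The low-discrepancy Halton sequence covers the feature box more evenly than uniform random sampling, which is the property that makes it attractive for exploring candidate counterfactuals; the multi-class aspect enters simply through the classifier's predicted labels.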