Classification systems based on Machine Learning hide the logic of their internal decision processes from users; hence, post-hoc explanations of their predictions are often required. This paper proposes Fuzzy-LORE, a method that generates local explanations for fuzzy-based Machine Learning systems. First, it learns a local fuzzy decision tree from a set of synthetic neighbours of the input instance. Then, from the logic of the fuzzy decision tree it extracts a meaningful explanation consisting of a set of decision rules (which explain the reasons behind the decision), a set of counterfactual rules (which describe small changes to the instance's features that would lead to a different outcome), and a set of specific counterfactual examples. Our experiments on a real-world medical dataset show that Fuzzy-LORE outperforms prior approaches for generating local explanations.
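The pipeline sketched in the abstract (synthetic neighbourhood generation, local surrogate tree, rule and counterfactual extraction) can be illustrated with a minimal Python sketch. This is not the authors' implementation: a crisp scikit-learn `DecisionTreeClassifier` stands in for the fuzzy decision tree, the neighbourhood generator is a simple Gaussian perturbation, and all function names are hypothetical.

```python
# Hypothetical sketch of a LORE-style local explanation pipeline.
# A crisp DecisionTreeClassifier stands in for the paper's fuzzy tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def explain_instance(black_box, x, n_neighbours=500, scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Generate synthetic neighbours around the input instance x.
    Z = x + rng.normal(0.0, scale, size=(n_neighbours, x.shape[0]))
    y = black_box(Z)  # label the neighbourhood with the black box
    # 2. Learn a local surrogate decision tree on the neighbourhood.
    tree = DecisionTreeClassifier(max_depth=3).fit(Z, y)
    pred = tree.predict(x.reshape(1, -1))[0]
    # 3a. Decision rule: the root-to-leaf path followed by x,
    #     as (feature index, comparison, threshold) triples.
    path = tree.decision_path(x.reshape(1, -1)).indices
    t = tree.tree_
    rule = [(t.feature[n],
             "<=" if x[t.feature[n]] <= t.threshold[n] else ">",
             t.threshold[n])
            for n in path if t.feature[n] >= 0]
    # 3b. Counterfactual examples: the neighbours with a different
    #     outcome that lie closest to x; counterfactual rules would be
    #     read off their root-to-leaf paths in the same way.
    other = Z[y != pred]
    cfs = other[np.argsort(np.linalg.norm(other - x, axis=1))[:3]]
    return pred, rule, cfs

# Toy black box: the class depends only on the sign of the first feature.
bb = lambda Z: (Z[:, 0] > 0).astype(int)
pred, rule, cfs = explain_instance(bb, np.array([0.05, 1.0]))
```

The returned `rule` explains the local decision, while `cfs` holds nearby instances that the black box classifies differently, mirroring the three explanation components described in the abstract.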