Machine learning (ML) is attracting rapidly growing interest due to continuous improvements in predictive performance, and ML models are used in many different applications to support human users. However, the representational power that allows these models to solve difficult tasks also makes them hard for humans to understand. This opacity leaves room for undetected errors and limits the potential of ML, as such models cannot be deployed in critical environments. In this paper, we propose employing Explainable AI (xAI) for both model and data set refinement in order to introduce trust and comprehensibility. Model refinement utilizes xAI to provide insights into the inner workings of an ML model, to identify its limitations, and to derive potential improvements. Similarly, xAI is used in data set refinement to detect and resolve problems in the training data.
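To make the two refinement loops concrete, the following is a minimal sketch, not the implementation proposed in this paper, of how an attribution method such as input-gradient saliency could be used: first to inspect which input features drive a model's prediction (model refinement), and then to flag training samples whose relevance concentrates on a feature assumed to be spurious (data set refinement). The toy model, the spurious-feature index, and the 0.5 relevance threshold are illustrative assumptions.

```python
# Minimal sketch: input-gradient saliency for model and data set inspection.
import torch
import torch.nn as nn

# Hypothetical small classifier; any differentiable model works the same way.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
model.eval()

def saliency(x: torch.Tensor, target: int) -> torch.Tensor:
    """Return |d logit_target / d x| as a simple per-feature relevance score."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    logits[0, target].backward()
    return x.grad.abs().squeeze(0)

# Model refinement: inspect which input features drive a single prediction.
sample = torch.randn(1, 4)
pred = int(model(sample).argmax(dim=1))
print("relevance per feature:", saliency(sample, pred))

# Data set refinement: flag training samples whose relevance concentrates on a
# feature assumed (by domain knowledge) to be spurious, here feature 0.
def looks_spurious(x: torch.Tensor, label: int, spurious_idx: int = 0) -> bool:
    rel = saliency(x, label)
    return bool(rel[spurious_idx] > 0.5 * rel.sum())

train_x = torch.randn(8, 4)
train_y = torch.randint(0, 3, (8,))
flagged = [i for i in range(len(train_x))
           if looks_spurious(train_x[i:i + 1], int(train_y[i]))]
print("samples to review:", flagged)
```

Flagged samples would then be reviewed by a human, for example to correct labels or remove artifacts, before the model is retrained.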