Organizations employ data mining to discover patterns in historical data and to learn predictive models from them. Depending on the predictive model, predictions may be more or less accurate, which raises the question of how reliable individual predictions are. This paper proposes a reference process, aligned with CRISP-DM, that enables the assessment of the reliability of individual predictions obtained from a predictive model. The reference process describes the activities required at each stage of the development process to establish a reliability assessment approach for a predictive model. The paper then presents two specific approaches for reliability assessment in more detail: the perturbation of input cases and local quality measures. Furthermore, the paper describes elements of a knowledge graph that captures important metadata about the development process and the training data. The knowledge graph serves to properly configure and employ the reliability assessment approaches.
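To give an intuition for the perturbation approach named above, the following is a minimal illustrative sketch, not the paper's implementation: it assumes a scikit-learn-style classifier exposing `predict_proba`, numeric input features, and a hypothetical Gaussian noise scale; all names and parameters are assumptions for illustration only.

```python
import numpy as np

def perturbation_reliability(model, x, noise_scale=0.05, n_samples=100, rng=None):
    """Estimate the reliability of a single prediction by perturbing the input case.

    Intuition: a case whose predicted class stays stable under small input
    perturbations is treated as more reliable than one whose prediction
    fluctuates. Assumes `model` offers a scikit-learn-style predict_proba(X)
    and `x` is a 1-D numeric feature vector.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)

    # Draw perturbed copies of the input case with small Gaussian noise.
    perturbed = x + rng.normal(scale=noise_scale, size=(n_samples, x.size))

    # Predicted class probabilities for the original and the perturbed cases.
    base = model.predict_proba(x.reshape(1, -1))[0]
    probs = model.predict_proba(perturbed)

    # Reliability indicator: fraction of perturbed cases that keep the
    # originally predicted class (1.0 = fully stable, 0.0 = unstable).
    base_class = int(np.argmax(base))
    return float(np.mean(np.argmax(probs, axis=1) == base_class))
```

In such a sketch, a score close to 1 would indicate that the prediction for the given case is stable under small input changes; in the approach described in the paper, the concrete configuration (e.g., the perturbation scheme and its parameters) is intended to be derived from the metadata captured in the knowledge graph.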