Context: Previous studies have indicated that the stability of Just-In-Time Software Defect Prediction (JIT-SDP) models can change over time due to various factors, including changes in the code, the environment, and other variables. This phenomenon, commonly referred to as Concept Drift (CD), can lead to a decline in model performance over time. It is therefore essential to monitor model performance and data distribution over time to identify any fluctuations.

Objective: We aim to identify CD points on unlabeled input data in order to address performance instability in evolving software, and to investigate the compatibility of the proposed methods with methods based on labeled input data. To this end, we consider the chronological order of the commits produced by developers over time. In this study, we propose several methods that monitor, over time, the distance between model interpretation vectors and between the values of their individual features, and flag significant distances as CD points. We compare these methods with several baseline methods.

Method: We used a publicly available dataset that has been developed over a long period and comprises 20 open-source projects. To reflect real-world scenarios, we also accounted for verification latency. Our initial idea was to identify CD points within each project by discovering significant distances between consecutive interpretation vectors of incremental and non-incremental models.

Results: We compared the performance of the proposed CD Detection (CDD) methods against several baseline methods built on incremental Naïve Bayes classification; these baselines monitor the error rate of various performance measures. We evaluated the proposed approaches using well-known CDD measures: accuracy, missed detection rate, mean time to detection, mean time between false alarms, and mean time ratio.
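The core detection idea described above (flagging a CD point when the distance between consecutive interpretation vectors becomes significantly large) can be sketched as follows. This is an illustrative simplification, not the study's exact procedure: the Euclidean distance and the threshold rule (mean plus k standard deviations of past distances) are assumptions.

```python
import numpy as np

def detect_drift_points(interp_vectors, k=3.0, warmup=10):
    """Flag concept-drift points where the distance between consecutive
    model-interpretation vectors is significantly larger than usual.

    interp_vectors: shape (n_steps, n_features), one interpretation vector
    (e.g., feature-importance values) per time step.
    k, warmup: assumed hyperparameters of this sketch, not from the study.
    """
    vecs = np.asarray(interp_vectors, dtype=float)
    # Euclidean distance between each pair of consecutive vectors.
    dists = np.linalg.norm(np.diff(vecs, axis=0), axis=1)
    drift_points = []
    for t in range(warmup, len(dists)):
        history = dists[:t]
        # A distance is "significant" if it exceeds mean + k * std of the past.
        threshold = history.mean() + k * history.std()
        if dists[t] > threshold:
            drift_points.append(t + 1)  # index into interp_vectors
    return drift_points
```

In a stream of stable interpretation vectors followed by an abrupt shift, the first post-shift index is flagged; the same scheme could be applied per feature to monitor individual interpretation values.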
Our evaluation was conducted using the Friedman statistical test.

Conclusions: According to the results, the method based on the average interpretation vector does not recognize CD accurately, and methods that rely on incremental classifiers have the lowest accuracy. In contrast, methods based on non-incremental learning that use interpretation values with a positive effect size achieve the highest accuracy. By employing strategies based on the interpretation values of individual features, we were able to identify the features that contribute most to detecting CD.
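For reference, the CDD evaluation measures named in the Results can be computed from detected and true drift points roughly as follows. The formulations here (in particular MTR = MTFA / MTD × (1 − MDR), and falling back to the stream length when there are too few false alarms) are assumptions based on common definitions in the CDD literature, not taken from this study.

```python
def evaluate_cdd(true_drifts, detected, stream_len):
    """Illustrative computation of common CDD measures (assumed formulations)."""
    delays, false_alarms, used = [], [], set()
    for d in sorted(detected):
        # Match each detection to the most recent preceding, unmatched true drift.
        prior = [t for t in true_drifts if t <= d]
        if prior and prior[-1] not in used:
            used.add(prior[-1])
            delays.append(d - prior[-1])
        else:
            false_alarms.append(d)
    mdr = 1 - len(used) / len(true_drifts)          # missed detection rate
    mtd = sum(delays) / len(delays) if delays else float("inf")
    if len(false_alarms) > 1:
        gaps = [b - a for a, b in zip(false_alarms, false_alarms[1:])]
        mtfa = sum(gaps) / len(gaps)                # mean time between false alarms
    else:
        mtfa = float(stream_len)                    # fallback: too few false alarms
    mtr = mtfa / mtd * (1 - mdr) if mtd > 0 else float("inf")
    return {"MDR": mdr, "MTD": mtd, "MTFA": mtfa, "MTR": mtr}
```

Higher MTFA and MTR and lower MDR and MTD indicate a better detector under these definitions.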