Deep learning models, despite their potential, often function as "black boxes", posing significant challenges to interpretability, particularly in sensitive fields such as healthcare and finance. Addressing this issue, we introduce a novel, human-understandable metric aimed at enhancing the interpretability of local interpretable model-agnostic explanations (LIME). Distinct from previous methodologies, this metric is designed to assess the shift in classification probability upon the removal of features (words), thereby providing a unique insight into interpretability. We deploy a convolutional neural network (CNN) for sentiment analysis, interpret its predictions using LIME, and evaluate these explanations with three distinct metrics: our proposed metric, a conventional model-based metric, and human evaluations. Through rigorous validation, our metric demonstrated high recall, a key indicator of relevant instance retrieval. Results showed worst-case and best-case recalls of 80.29% and 98.19%, respectively, against a logistic regression metric for "good" and "excellent" classifications. Comparisons with human evaluations using single-word explanations revealed worst-case and best-case recalls of 82.03% and 94.37%, respectively. These high recall values highlight our metric's effectiveness in aligning with both human judgments and model-based metrics, emphasizing its capacity to capture essential aspects of explainability. Furthermore, our study outlines certain limitations of LIME, setting the stage for future interpretability-focused AI research.
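To make the core idea concrete, the sketch below illustrates a word-removal probability-shift score of the kind described above. It is a minimal illustration, not the paper's implementation: the function name `probability_shift` and the `predict_proba` interface (assumed to return a NumPy-style array of class probabilities for a batch of raw strings, as in scikit-learn) are assumptions for the example.

```python
def probability_shift(text, predict_proba):
    """Score each word by how much its removal changes the probability
    of the originally predicted class (illustrative sketch only)."""
    words = text.split()
    base = predict_proba([text])[0]        # class probabilities on the full text
    predicted_class = int(base.argmax())   # class the model originally predicts
    scores = []
    for i, word in enumerate(words):
        reduced = " ".join(words[:i] + words[i + 1:])        # drop one word
        shifted = predict_proba([reduced])[0][predicted_class]
        scores.append((word, float(base[predicted_class] - shifted)))
    # Larger positive shifts suggest words the prediction relied on more heavily.
    return sorted(scores, key=lambda s: s[1], reverse=True)
```

In use, such a ranking could be compared against the words LIME highlights for the same prediction, which is the kind of agreement the recall figures above quantify.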