Feature importance techniques offer valuable insights into machine learning (ML) models by quantitatively assessing each variable's contribution to the model's predictions. This quantification differs across explanation methods and across multiple almost equally accurate models (Rashomon models), creating explanation and model multiplicities. This motivated a novel framework called method agnostic model class reliance range (MAMCR) for identifying a unified explanation across methods for multiple models. This consensus explanation provides each feature's importance range for a class of models. Using state-of-the-art feature importance methods, experiments are conducted on popular machine learning datasets with an $\varepsilon$-threshold value of 0.1. The dataset-specific Rashomon sets of 200 models, together with the prediction accuracy of the concerned reference models ($m^*$), have produced encouraging results in obtaining a consensus model reliance explanation that is consistent across multiple methods. The experimental results confirm that the prediction accuracy level of models has an impact on the estimation of feature importance ranges. Moreover, on all the experimented datasets, the order of features suggested by MAMCR consistently leads to better model performance than the state-of-the-art methods.
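To make the core idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual MAMCR algorithm) of the two ingredients the abstract mentions: an $\varepsilon$-Rashomon set of almost equally accurate models, and a per-feature importance (model reliance) range computed over that set. The synthetic data, the random linear model class, and permutation importance as the explanation method are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data: feature 0 is informative, feature 1 is noise.
# (Illustrative assumption; the paper uses popular ML benchmark datasets.)
n, d = 500, 2
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(int)

def accuracy(w, X, y):
    """Accuracy of a linear classifier sign(X @ w)."""
    return np.mean((X @ w > 0).astype(int) == y)

# Candidate models: 200 random linear classifiers, a stand-in for any model class.
candidates = rng.normal(size=(200, d))
accs = np.array([accuracy(w, X, y) for w in candidates])

# Rashomon set: models within epsilon of the best ("reference") model's accuracy.
epsilon = 0.1
best = accs.max()
rashomon = candidates[accs >= best - epsilon]

def permutation_importance(w, X, y, j, n_repeats=5):
    """Accuracy drop when feature j is permuted (a simple model reliance score)."""
    base = accuracy(w, X, y)
    drops = []
    for _ in range(n_repeats):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        drops.append(base - accuracy(w, Xp, y))
    return float(np.mean(drops))

# Reliance range: the min/max importance of each feature across the Rashomon set.
for j in range(d):
    imps = [permutation_importance(w, X, y, j) for w in rashomon]
    print(f"feature {j}: reliance range [{min(imps):.3f}, {max(imps):.3f}]")
```

In this toy setup the informative feature shows a wide, clearly positive reliance range, while the noise feature's range stays near zero; MAMCR additionally reconciles such ranges across multiple explanation methods rather than a single one.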