Over the years, several frameworks have been proposed in the domain of Explainable AI (XAI); however, their practical applicability and utility remain to be clarified. Neighbourhood contexts have been shown to significantly impact the explanations generated by XAI frameworks, and thus directly affect their utility in addressing specific questions, or "explananda". This work introduces a methodology that uses a comprehensive range of neighbourhood contexts to evaluate and enhance the utility of specific XAI techniques, particularly Feature Importance and Counterfactuals. The evaluation targets two explananda. The first examines whether the collection of features should be halted because, according to the AI model, the current set of information is sufficient; here, information refers to the features present in the data used to train the AI-based system. The second explores which information (features) should be collected next so that the AI outputs the same classification it would have generated with all the information present. These questions serve as a platform to demonstrate our methodology's ability to assess the impact of customised neighbourhood contexts on the utility of XAI.
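
To make the two explananda concrete, the following is a minimal, hypothetical sketch (not the paper's implementation) using a scikit-learn classifier. Here the "neighbourhood context" is assumed to be a simple one: unobserved features are filled by sampling values from the training data. The `observed` feature set, the 0.95 stability threshold, and the agreement-based selection rule are all illustrative assumptions.

```python
# Hypothetical illustration of the two explananda; the imputation scheme
# below is one assumed neighbourhood context, not the paper's method.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

def predictions_under_context(x, observed, X_ref, n_samples=200):
    """Predicted classes when unobserved features are drawn from X_ref."""
    samples = X_ref[rng.integers(len(X_ref), size=n_samples)].copy()
    samples[:, list(observed)] = x[list(observed)]  # pin collected features
    return model.predict(samples)

x = X[0]
observed = set(range(10))            # features collected so far (assumption)
full_pred = model.predict(x.reshape(1, -1))[0]

# Explanandum 1: is the current information sufficient to halt collection?
# We check how stable the prediction is across the neighbourhood context.
preds = predictions_under_context(x, observed, X)
agreement = np.mean(preds == full_pred)
sufficient = agreement > 0.95        # stability threshold (assumption)
print(f"agreement with full-information prediction: {agreement:.2f}")

# Explanandum 2: which feature should be collected next? Pick the one
# whose acquisition best aligns the contextual predictions with the
# classification the model produces given all information.
def gain(j):
    return np.mean(predictions_under_context(x, observed | {j}, X) == full_pred)

candidates = [j for j in range(X.shape[1]) if j not in observed]
print(f"next feature to collect: index {max(candidates, key=gain)}")
```

In this sketch, the same agreement statistic answers both questions: explanandum one reads it as a stopping criterion, while explanandum two uses its improvement to rank candidate features, making explicit how the choice of neighbourhood context shapes both answers.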