In modern industrial systems, collected textual data accumulates over time, offering an important source of information for enhancing present and future industrial practices. Although many AI-based solutions have been developed in the literature for domain-specific information retrieval (IR) from this data, the explainability of these systems has rarely been investigated in such domain-specific environments. Beyond meeting domain requirements within an explainable intelligent IR system, transferring the explainable IR algorithm to other domains remains an open challenge, owing to the high costs of the intensive customization and knowledge modeling required when developing new explainable solutions for each industrial domain. In this article, we present a transferable framework for generating domain-specific explanations for intelligent IR systems. The aim of our work is to provide a comprehensive approach for constructing explainable IR and recommendation algorithms that adapt to domain requirements while remaining usable across multiple domains. Our method uses a knowledge graph (KG) to model the domain knowledge, and the KG provides a solid foundation for developing intelligent IR solutions. Building on the same KG, we develop graph-based components for generating textual and visual explanations of the retrieved information; this structured approach accounts for domain requirements and supports transferability to other domain-specific environments. The use of the KG required minimal-to-zero adjustments when creating explanations for multiple intelligent IR algorithms in multiple domains. We test our method in two different use cases, one centered on semiconductor manufacturing and one on job-to-applicant matching. Our quantitative results show that our approach is highly capable of generating high-level explanations for end users. In addition, the developed explanation components adapted well to both industrial domains without sacrificing the overall accuracy of the intelligent IR algorithm. Furthermore, a qualitative user study was conducted: we recorded a high level of acceptance from the users, who reported an enhanced overall experience with the explainable IR system.