Knowledge representation and reasoning (KRR) is a fundamental area of artificial intelligence (AI) research, concerned with encoding world knowledge as logical formulae in ontologies. This formalism enables logic-based AI systems to derive new insights from existing knowledge. Within KRR, description logics (DLs) are a prominent family of languages for representing knowledge formally. They are decidable fragments of first-order logic, and their models can be visualized as edge- and vertex-labeled directed graphs, in which vertices correspond to domain elements labeled with concept names and edges correspond to the binary relations interpreting role names. DLs support various reasoning tasks, including checking the satisfiability of statements and deciding entailment. However, a significant challenge arises when models of DL ontologies are computed for the purpose of explaining reasoning results. Although existing algorithms compute models for standard reasoning tasks efficiently, they usually disregard aspects of human cognition, which can yield models that are less effective for explanatory purposes. This paper addresses this challenge by proposing an approach to enhance the intelligibility of models of DL ontologies for users. By integrating insights from cognitive science and philosophy, we aim to identify key graph properties that make models more accessible and useful for explanation.
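To make the graph view of DL models concrete, consider a minimal sketch (a generic textbook-style example, not one drawn from the ontologies studied in this work): a TBox with a single axiom and one of its models.
\[
\mathcal{T} = \{\, \mathsf{Parent} \sqsubseteq \exists \mathsf{hasChild}.\mathsf{Person} \,\},
\qquad
\mathcal{I} = (\Delta^{\mathcal{I}}, \cdot^{\mathcal{I}})
\]
\[
\Delta^{\mathcal{I}} = \{a, b\}, \quad
\mathsf{Parent}^{\mathcal{I}} = \{a\}, \quad
\mathsf{Person}^{\mathcal{I}} = \{b\}, \quad
\mathsf{hasChild}^{\mathcal{I}} = \{(a, b)\}.
\]
Viewed as a graph, $\mathcal{I}$ consists of a vertex $a$ labeled $\mathsf{Parent}$, a vertex $b$ labeled $\mathsf{Person}$, and a directed edge from $a$ to $b$ labeled $\mathsf{hasChild}$; the interpretation satisfies the axiom because $a$ has a $\mathsf{hasChild}$-successor belonging to $\mathsf{Person}^{\mathcal{I}}$.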