While comprehensive knowledge networks can be instrumental in finding solutions to complex problems or supporting the development of detailed simulation models, their large number of nodes and edges can become a hindrance. When the representation of a network becomes opaque, it stops fulfilling its role as a shared representation of a system between participants and modelers; consequently, participants become less engaged in the model-building process. Combating the information overload created by large conceptual models is not merely a matter of changing formats: shifting from an unwieldy diagram to enormous amounts of text does not promote engagement. Rather, we posit that participants need an environment that provides details on demand and where interactions with a model rely primarily on a familiar format (i.e., text). In this study, we developed a visual analytics environment in which linked visualizations allow participants to interact with large conceptual models, as shown in a case study with hundreds of nodes and almost a thousand relationships. Our environment leverages several advances in generative AI to automatically transform (i) a conceptual model into detailed paragraphs, (ii) detailed text into an executive summary of a model, (iii) prompts about the model into safe versions that avoid sensitive topics, and (iv) a description of the model into a complementary illustration. By releasing our work open source along with a video of our case study, we encourage other modelers to use this approach with their participants. Their feedback, together with future usability studies, is key to responding to participants' needs by improving our environment for individual preferences, models, and application domains.
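To make steps (i) and (ii) of this pipeline concrete, the sketch below shows one way a conceptual model, represented as node-relationship-node triples, could be verbalized into paragraphs and then condensed into an executive summary. This is a minimal illustration under stated assumptions, not the implementation used in our environment: the `generate` callable and the prompt wording are hypothetical placeholders for whichever text-generation backend a modeler chooses.

```python
# Hypothetical sketch of steps (i) and (ii): verbalizing a conceptual model and
# summarizing the result. `generate` stands in for any text-generation backend;
# it is a placeholder, not the API used in our environment.
from typing import Callable, List, Tuple

Triple = Tuple[str, str, str]  # (source concept, relationship, target concept)


def model_to_paragraphs(triples: List[Triple],
                        generate: Callable[[str], str]) -> str:
    """Step (i): turn node-relationship-node triples into detailed prose."""
    facts = "\n".join(f"- {s} {rel} {t}" for s, rel, t in triples)
    prompt = ("Rewrite the following relationships from a conceptual model "
              "as detailed, readable paragraphs:\n" + facts)
    return generate(prompt)


def paragraphs_to_summary(paragraphs: str,
                          generate: Callable[[str], str]) -> str:
    """Step (ii): condense the detailed text into an executive summary."""
    prompt = ("Summarize the following description of a conceptual model "
              "as a short executive summary:\n" + paragraphs)
    return generate(prompt)


if __name__ == "__main__":
    # Trivial stand-in for a generative model, used only to keep the sketch runnable.
    echo = lambda prompt: prompt.splitlines()[-1]
    triples = [("food insecurity", "increases", "stress"),
               ("stress", "reduces", "sleep quality")]
    detailed_text = model_to_paragraphs(triples, echo)
    print(paragraphs_to_summary(detailed_text, echo))
```

In practice, the same pattern extends to steps (iii) and (iv): the prompt-rewriting and illustration steps would replace `generate` with, respectively, a safety-oriented rewriting call and an image-generation call, keeping the triple-based model representation as the single source of truth.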