Concept maps (CMs) are tools for visualizing relationships between ideas, supporting more effective comprehension and learning. However, generating CMs automatically from unstructured text remains a challenge, as it typically requires semantic markup followed by complex processing. This paper introduces an approach that addresses this hurdle by harnessing fine-tuned Large Language Models (LLMs). Our method uses these models to extract structured propositions from unstructured text, which then serve as the foundation for constructing a CM. This process reverses the transformation of CM relations into first-order logic propositions explored in our previous work. We evaluate the proposed solution using precision and recall metrics, comparing its output against concept maps crafted by experts. The results indicate that our method can contribute significantly to the automatic generation of CMs, illustrating another application enabled by recent advances in artificial intelligence. Future research should continue to refine the model and explore applications across diverse domains.
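As an illustrative sketch of the pipeline the abstract describes (not the paper's actual implementation; all names and data here are hypothetical), extracted propositions can be modeled as (concept, linking phrase, concept) triples, assembled into a concept-map graph, and scored against an expert-built map with precision and recall:

```python
from typing import Set, Tuple

# A proposition as a (source concept, linking phrase, target concept) triple.
Proposition = Tuple[str, str, str]

def build_concept_map(propositions: Set[Proposition]):
    """Assemble a concept map as a set of concept nodes and labeled edges."""
    nodes: Set[str] = set()
    edges: Set[Proposition] = set()
    for src, relation, dst in propositions:
        nodes.update((src, dst))
        edges.add((src, relation, dst))
    return nodes, edges

def precision_recall(predicted: Set[Proposition], expert: Set[Proposition]):
    """Score extracted propositions against an expert-crafted concept map."""
    true_positives = len(predicted & expert)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(expert) if expert else 0.0
    return precision, recall

# Hypothetical propositions an LLM might extract from a short text.
predicted = {("concept map", "visualizes", "relationships"),
             ("LLM", "extracts", "propositions")}
# A hypothetical expert reference map for the same text.
expert = {("concept map", "visualizes", "relationships"),
          ("propositions", "form", "concept map")}

nodes, edges = build_concept_map(predicted)
p, r = precision_recall(predicted, expert)
print(p, r)  # → 0.5 0.5
```

Exact triple matching is the simplest scoring choice; the evaluation described in the abstract may instead use a looser matching criterion between extracted and expert propositions.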