Background
Over the past few decades, the process and methodology of automatic question generation (AQG) have undergone significant transformations. Recent progress in generative natural language models has opened up new possibilities for generating educational content.

Objectives
This paper explores the potential of large language models (LLMs) for generating computer science questions that are sufficiently annotated for automatic learner model updates, are fully situated in the context of a particular course, and address the cognitive dimension "understand".

Methods
Unlike previous attempts that rely on basic methods such as direct prompting of ChatGPT, our approach employs more targeted strategies such as retrieval-augmented generation (RAG) to produce contextually relevant and pedagogically meaningful learning objects.

Results and Conclusions
Our results show that generating structural and semantic annotations works well. However, this success did not extend to relational annotations. The quality of the generated questions often did not meet educational standards, highlighting that although LLMs can contribute to the pool of learning materials, their current level of performance requires significant human intervention to refine and validate the generated content.