For simulators to provide effective mission training, there must be capabilities for scenario selection, briefing, debriefing, and performance evaluation — functions typically provided by instructors. This paper describes an approach for automated scenario selection, guided briefing and after-action review (AAR), and automated performance measurement. Underlying these intelligent functions is a tight coupling of training objectives, performance measures, and scenario events derived through extensive cognitive task analysis. Training objectives drive scenario selection and instantiation with events that provide opportunities to demonstrate related competencies; automated performance measures then monitor and evaluate user actions in response to these events, and the results are stored and aggregated for use in a guided AAR. A cognitive agent conducts the AAR in much the same way as a human instructor would, tailoring it to the user's performance, selecting the most important performance measures to discuss, highlighting aspects of importance to the mission, and summarizing the user's performance. Because each performance measure is tied to a training objective, the user's training profile can be updated for every training objective after the scenario, and subsequent scenarios can be selected to focus on objectives that have not yet been achieved, beginning the cycle anew.
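The closed training loop described above — select a scenario from unmet objectives, measure performance against scenario events, update the training profile, and reselect — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; all class names, event names, and scoring values are hypothetical assumptions.

```python
# Hypothetical sketch of the closed-loop training cycle: objectives drive
# scenario events, measured performance updates the profile, and the next
# scenario targets objectives still unmet. Names and scores are illustrative.

from dataclasses import dataclass


@dataclass
class TrainingObjective:
    name: str
    achieved: bool = False


def select_scenario(objectives):
    """Instantiate events that give opportunities to demonstrate unmet objectives."""
    return [f"event_for_{o.name}" for o in objectives if not o.achieved]


def measure_performance(events):
    """Stand-in for automated performance measurement; scores are fabricated here."""
    return {e: 0.9 if "navigate" in e else 0.5 for e in events}


def update_profile(objectives, scores, threshold=0.8):
    """Mark an objective achieved when its related measure meets the threshold."""
    for o in objectives:
        if scores.get(f"event_for_{o.name}", 0.0) >= threshold:
            o.achieved = True


objectives = [TrainingObjective("navigate"), TrainingObjective("communicate")]
events = select_scenario(objectives)          # both objectives still pending
scores = measure_performance(events)          # automated measures score each event
update_profile(objectives, scores)            # "navigate" passes, "communicate" does not
next_events = select_scenario(objectives)     # next scenario targets only "communicate"
```

The aggregated `scores` dictionary is also what a guided AAR would draw on, selecting the most significant measures to discuss with the trainee.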