This paper describes how to automatically detect potential automation surprises in interactive systems within ADEPT, a rapid design tool for automation interfaces. The analysis method proposed in this paper is based on a conformance relation, called full-control, between a model of the actual system and a mental model of it, that is, the system's behavior as perceived by the operator. Among other things, the method can automatically generate a so-called minimal full-control mental model for a given system. A system is well designed if it can be described by a relatively simple mental model for its operators, a property that can be assessed with the minimal full-control mental model generation algorithm. During the generation, potential automation surprises are detected and highlighted with execution examples that may lead to confusion. The analysis methods rely on an enriched version of labeled transition systems (LTSs) to describe both the system and the mental models. To integrate the analysis method into ADEPT, a semantics for ADEPT models is defined that makes it possible to translate them into enriched LTSs. The translation is automated for a class of ADEPT models characterized and defined in this paper. A case study demonstrates the proposed analysis framework and suggests how the integration with ADEPT can be improved.

Index Terms-ADEPT toolset, formal methods, human factors, human-machine interaction.