Models are used mainly to communicate among humans the most relevant aspects of the item being modelled. To achieve impact in modern, complex applications, modelling languages and tools must also support some level of composition. Moreover, executable models are the foundation of model-driven development; it is therefore crucial to study the understandability of executable models of behaviour, especially from the perspective of modular composition. We consider the match between the subtle semantics of executable models for applications such as reactive and real-time systems and the typically simpler intuitions of developers. Through a series of experiments with UML statecharts and logic-labelled finite-state machines (LLFSMs), we explore the understandability of event-driven versus logic-labelled state machines, as well as the architectural options for modular composition. We find that expertise in manipulating the models is essential, and that clarification of the semantics of LLFSMs is necessary for them to remain formally verifiable and suitable for robotic and embedded systems.
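To make the contrast concrete, the following is a minimal sketch, not taken from any particular statechart or LLFSM tool: an event-driven machine changes state only when an external event arrives, whereas a logic-labelled machine evaluates Boolean guard expressions over a snapshot of its variables at a predetermined schedule. All class, state, and variable names here are hypothetical illustrations.

```python
# Hypothetical sketch contrasting the two transition styles;
# names are illustrative, not from any cited tool or experiment.

class EventDrivenDoor:
    """Event-driven: a transition fires only when an event is delivered."""
    def __init__(self):
        self.state = "closed"

    def handle(self, event):
        if self.state == "closed" and event == "open_pressed":
            self.state = "open"
        elif self.state == "open" and event == "close_pressed":
            self.state = "closed"


class LogicLabelledDoor:
    """Logic-labelled: each execution step evaluates Boolean guards
    over a snapshot of sensor variables at a predetermined schedule."""
    def __init__(self):
        self.state = "closed"

    def step(self, sensors):
        # Transitions are labelled by plain Boolean expressions, so the
        # machine's behaviour is time-triggered rather than event-triggered,
        # which is what makes this style amenable to formal verification.
        if self.state == "closed" and sensors["button"]:
            self.state = "open"
        elif self.state == "open" and not sensors["button"]:
            self.state = "closed"


ed = EventDrivenDoor()
ed.handle("open_pressed")        # state changes only on this event

ll = LogicLabelledDoor()
ll.step({"button": True})        # guard evaluated on schedule: opens
ll.step({"button": False})       # next scheduled evaluation: closes
```

The sketch deliberately omits hierarchy, run-to-completion semantics, and scheduling details; it only shows the difference in what triggers a transition, which is the distinction the experiments described above examine.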