Abstract: Scenarios are an established means of specifying requirements for software systems. Scenario-based tests allow software models to be validated against such requirements. In this paper, we consider three alternative notations for defining such scenario tests on structural models: a semi-structured natural-language notation, a diagrammatic notation, and a fully structured textual notation. In particular, we performed a study to understand how these three notations compare with respect to the accuracy and effort of comprehending scenario-test definitions, as well as with respect to the detection of errors in the models under test. Twenty software professionals (software engineers, testers, researchers) participated in a controlled experiment based on six different comprehension and maintenance tasks. For each task, participants had to answer questions on a scenario-test definition and on a model under test. In an ex-post questionnaire, the participants rated each notation on a number of dimensions (e.g., practicality and scalability). Our results show that the choice of scenario-test notation can affect productivity (in terms of correctness and time) when testing software models for requirements conformance. In particular, the participants of our study spent less time and completed the tasks more accurately when using the natural-language notation than when using the other two notations. Moreover, they explicitly expressed a preference for the natural-language notation.