In Model-Driven Engineering (MDE), model transformation languages are used to describe important operations on models. Such domain-specific languages are developed specifically to describe transformation rules, according to which an output model is generated from an input model. In contrast to these domain-specific languages, techniques to analyze and improve the performance of programs written in a general-purpose language, such as Java or C, are well known. However, are such techniques also needed for model transformation languages?

Problem. Since these languages are only used in certain domains, the first question is whether performance is relevant for model transformations at all and whether techniques similar to those used to analyze and improve the performance of general-purpose languages are needed. Research on the performance of model transformations focuses mainly on comparing the performance of different languages or definition styles, or on optimizing the engine that executes the transformation. However, it is not clear to what extent these efforts can mitigate or prevent performance issues, and there is also a lack of studies that examine to what extent the performance of transformations is relevant.

Method. To close this gap and answer the initial question about the relevance of performance, we conducted an online survey. For this purpose, we developed a questionnaire and identified 649 authors as potential participants based on a Systematic Literature Review (SLR) on a selection of model transformation languages. Additionally, we acquired four further potential participants by advertising our study. In total, 84 participants took part in our survey. We used statistical tests such as Kendall's τ_c, the Kruskal-Wallis test, and the Mann-Whitney U test to evaluate our hypotheses on relevant factors for the performance of model transformations.

Results. The results show that a specific level of performance is desired and that there is a willingness to improve performance. In this regard, we identified a need for insights that help developers better understand how a transformation is executed in order to improve its performance. Furthermore, we used hypothesis tests to investigate which factors influence whether participants try to analyze or improve the performance of model transformations. The main result of these tests is that satisfaction with the execution time, the size of the models used, the importance of not exceeding a specific execution time in the average case, and knowledge of how a transformation engine executes a transformation are relevant factors.
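
As a purely illustrative aside, the following sketch shows how non-parametric tests of the kind named above (Kendall's τ, Kruskal-Wallis, Mann-Whitney U) can be applied to ordinal survey responses with SciPy. It is not the authors' analysis script; the variable names, group sizes, and data values are invented for demonstration only.

```python
# Minimal sketch, assuming hypothetical Likert-scale survey data; not the study's actual analysis.
from scipy import stats

# Hypothetical satisfaction ratings with execution time, grouped by model size category.
satisfaction_small = [5, 4, 4, 5, 3, 4]
satisfaction_medium = [3, 4, 2, 3, 3, 4]
satisfaction_large = [2, 1, 3, 2, 2, 1]

# Kruskal-Wallis test: do the ratings differ across the three groups?
h_stat, p_kw = stats.kruskal(satisfaction_small, satisfaction_medium, satisfaction_large)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4f}")

# Mann-Whitney U test: pairwise comparison of two groups, e.g. small vs. large models.
u_stat, p_mw = stats.mannwhitneyu(satisfaction_small, satisfaction_large,
                                  alternative="two-sided")
print(f"Mann-Whitney U: U = {u_stat:.2f}, p = {p_mw:.4f}")

# Kendall's tau-c: rank correlation between two ordinal items
# (variant="c" requires SciPy >= 1.7; hypothetical responses below).
model_size_rank = [1, 1, 2, 2, 3, 3, 3, 2, 1]
tried_to_optimize = [1, 2, 2, 3, 4, 5, 4, 3, 2]
tau_c, p_tau = stats.kendalltau(model_size_rank, tried_to_optimize, variant="c")
print(f"Kendall's tau-c: tau = {tau_c:.2f}, p = {p_tau:.4f}")
```

All three tests are non-parametric, which is why they suit ordinal (e.g., Likert-scale) survey answers where means and variances are not meaningful.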