Automated business processes running on distributed transaction processing (DTP) systems characterize the IT backbone of service industries. New web services standards such as BPEL have increased the importance of DTP systems in business practice. IT departments are required to meet pre-defined quality-of-service metrics, so performance prediction is essential. Unfortunately, the complexity of multiple interacting services running on multiple hardware resources, as well as the volatility in the demand for these services, can make performance analysis extremely difficult. While business process automation has been a dominant topic in recent years, surprisingly little has been published on performance modelling of large-scale DTP systems. In this paper, we describe these systems with respect to their workloads and technical features, and we experimentally compare the predictive accuracy of different types of queueing models and discrete event simulations. The experiments are based on two real-world DTP systems of a telecom company and the corresponding data sets. Overall, we found that while the results for average utilization scenarios are quite similar, the effort to implement and run analytical solutions is much lower. As long as the standard distributional assumptions of analytical models hold, they provide a reliable and fast methodology for exploring different demand mix scenarios, even for large-scale systems. The difficulty of estimating service and arrival time parameters and the demand mix for the respective queueing network models can largely be reduced with appropriate tooling; often, this information is missing in IT departments. Complex event conditions and error handling in DTP systems can also complicate the analysis. For many DTP applications, however, performance modelling could provide valuable decision support for service level management.
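To illustrate why the analytical route is fast for demand-mix exploration, the following is a minimal sketch of a standard open, product-form (separable) queueing network evaluation, i.e. the utilization law combined with M/M/1-style residence times. The class names, service-demand matrix, and arrival rates are hypothetical and are not taken from the paper's case studies.

```python
# Sketch of an open, separable queueing network evaluation, as commonly assumed
# by analytical DTP performance models. All class names, demands, and arrival
# rates below are hypothetical illustration values.

service_demand = {  # seconds of service per request, per resource
    "order_activation": {"app_server": 0.020, "db_server": 0.035},
    "billing_update":   {"app_server": 0.012, "db_server": 0.050},
}

def evaluate(arrival_rates):
    """Return per-resource utilization and per-class response times."""
    resources = {r for demands in service_demand.values() for r in demands}
    # Utilization of resource k: rho_k = sum_c lambda_c * D_{c,k}
    utilization = {
        r: sum(arrival_rates[c] * service_demand[c].get(r, 0.0)
               for c in arrival_rates)
        for r in resources
    }
    if any(u >= 1.0 for u in utilization.values()):
        raise ValueError("unstable scenario: a resource is saturated")
    # Residence time of class c at resource k: D_{c,k} / (1 - rho_k)
    response_times = {
        c: sum(d / (1.0 - utilization[r])
               for r, d in service_demand[c].items())
        for c in arrival_rates
    }
    return utilization, response_times

# Exploring one demand-mix scenario (requests per second for each class):
util, resp = evaluate({"order_activation": 15.0, "billing_update": 8.0})
print(util)  # per-resource utilization, e.g. {'app_server': 0.396, 'db_server': 0.925}
print(resp)  # mean end-to-end response time per class, in seconds
```

Because each scenario reduces to closed-form arithmetic over the service-demand matrix, a large number of demand mixes can be evaluated in milliseconds, whereas a discrete event simulation would require a separate run per scenario.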