Graphics Processing Units (GPUs), originally developed for computer graphics, are now commonly used to accelerate parallel applications. Because GPUs are designed for high efficiency, evaluating their performance is crucial. In recent years, researchers have tackled this problem by proposing solutions such as analytical models and digital simulators, which are, however, often complex to use or to adapt to the user's needs. Thanks to its high flexibility, model-based analysis is widely used to evaluate system properties, including performance. GPU models that represent both the architecture and the software in execution have started to appear, but they often rely on strong assumptions that undermine their usability. In this work we develop a Stochastic Activity Network (SAN) model to evaluate the performance of CUDA applications running on NVIDIA GPUs. The model takes as input a representation of the program's instructions, parsed from the CUDA SASS assembly file, and a list of parameters that make it configurable by the user. We tune our model to match the architecture of two different NVIDIA GPUs and simulate the execution of a CUDA program. We then compare the results with those obtained by executing the program on the real GPUs.
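
For illustration only, and not the tool described in this work: a minimal Python sketch of how SASS disassembly lines, such as those produced by `cuobjdump -sass`, could be parsed into a flat list of instruction records of the kind the model takes as input. The `SassInstruction` record and the regular expression are assumptions for this sketch; the exact disassembly layout varies across GPU architectures and CUDA versions.

```python
import re
from dataclasses import dataclass

@dataclass
class SassInstruction:
    offset: int        # instruction offset within the kernel (hexadecimal in the dump)
    opcode: str        # e.g. "MOV", "IADD3", "LDG.E"
    operands: list     # raw operand strings, e.g. ["R1", "c[0x0][0x28]"]

# Matches lines of the form (assumed layout, may differ across CUDA versions):
#   /*0008*/   MOV R1, c[0x0][0x28] ;   /* 0x00000a00ff017624 */
LINE_RE = re.compile(r"/\*([0-9a-fA-F]+)\*/\s+(@!?P\d+\s+)?(\S+)\s*([^;]*);")

def parse_sass(text: str) -> list:
    """Turn a SASS disassembly dump into a flat list of instruction records."""
    instructions = []
    for line in text.splitlines():
        m = LINE_RE.search(line)
        if not m:
            continue  # skip headers, labels, and encoding-only lines
        offset, _pred, opcode, rest = m.groups()
        operands = [op.strip() for op in rest.split(",") if op.strip()]
        instructions.append(SassInstruction(int(offset, 16), opcode, operands))
    return instructions
```

Such a list (opcode plus operands per instruction) is one plausible intermediate form between the SASS dump and a configurable performance model.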