Queuing networks are used widely in computer simulation studies. Examples can be found in areas such as supply chains, manufacturing workflow, and Internet routing. If the networks are fairly small in size and complexity, it is possible to create discrete event simulations of them without incurring significant delays in analyzing the system. However, as the networks grow in size, such analysis becomes time consuming and thus requires more expensive parallel processing computers or clusters. We have constructed a set of tools that allow the analyst to simulate queuing networks in parallel, using the fairly inexpensive and commonly available graphics processing units (GPUs) found in most recent computing platforms. We present an analysis of a GPU-based algorithm, describing the benefits and issues of the GPU approach. The algorithm clusters events, achieving speedup at the expense of an approximation error that grows with the cluster size. Using this approach, we achieved a 10x speedup with a small error in a specific implementation of a synthetic closed queuing network simulation. Based on observed error-analysis trends, this error can be mitigated to obtain reasonably accurate output statistics. Experimental results from a mobile ad hoc network simulation show that errors occur only in the time-dependent output statistics.
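
To make the event-clustering idea concrete, the sketch below shows one way pending events whose timestamps fall within a fixed time window can be grouped into a batch; events in a batch could then be processed concurrently on a GPU, with the window width controlling the trade-off between parallel speedup and approximation error. This is a minimal illustration under assumed names (Event, cluster_events, delta), not the tool set or algorithm described above.

```python
# Hypothetical sketch of time-window event clustering (names are illustrative,
# not taken from the paper's tool set).
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Event:
    time: float                           # simulation timestamp
    queue_id: int = field(compare=False)  # station the event belongs to

def cluster_events(fel, delta):
    """Pop every pending event whose timestamp lies within `delta` of the
    earliest event on the future event list `fel` (a heap). A sequential
    discrete event simulation would process these one at a time; grouping
    them into a batch is what allows a GPU to handle them concurrently,
    at the cost of treating near-simultaneous events as simultaneous."""
    cluster = []
    if fel:
        t0 = fel[0].time
        while fel and fel[0].time <= t0 + delta:
            cluster.append(heapq.heappop(fel))
    return cluster

# Usage: a larger `delta` yields larger clusters (more parallel work per
# batch) but a larger approximation error.
fel = []
for ev in (Event(0.10, 0), Event(0.12, 1), Event(0.50, 2), Event(0.53, 0)):
    heapq.heappush(fel, ev)
while fel:
    batch = cluster_events(fel, delta=0.05)
    print([(ev.time, ev.queue_id) for ev in batch])
```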