Most existing packet-level simulation tools are designed for experiments that model small to medium scale networks. The main reason for this limitation is the amount of computation power and memory available in a quasi mono-process simulation environment. To enable efficient packet-level simulation of large-scale scenarios, we introduce a CPU-GPU co-simulation framework in which synchronization and experiment design are performed on the CPU while the nodes' logical processes are executed in parallel on the GPU, following the master/worker model. The framework is developed using the Compute Unified Device Architecture (CUDA) and is denoted Cunetsim, the CUDA network simulator. In this work, we study node mobility and connectivity, as they are among the most time-consuming tasks when simulating a large-scale network. Simulation results show that Cunetsim's execution time remains stable and is significantly lower than that of existing approaches when computing mobility and connectivity, with no degradation in the accuracy of the results.
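
As a rough illustration of how per-node mobility and connectivity can be offloaded to the GPU in a master/worker fashion, the following CUDA sketch assigns one GPU thread per node and splits each simulation round into a mobility pass and a connectivity pass, with the CPU driving the rounds. All names (Node, mobility_step, connectivity_step, COMM_RANGE) and parameter values are illustrative assumptions, not the actual Cunetsim code.

// Illustrative sketch, not the Cunetsim implementation: one GPU thread per node,
// two kernel passes per simulation round (mobility, then connectivity).
#include <cuda_runtime.h>

#define NUM_NODES 4096
#define COMM_RANGE 100.0f          // assumed radio range

struct Node { float x, y, vx, vy; };

// Pass 1: each thread advances the position of its own node.
__global__ void mobility_step(Node *nodes, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        nodes[i].x += nodes[i].vx * dt;
        nodes[i].y += nodes[i].vy * dt;
    }
}

// Pass 2: each thread counts the neighbors of its node within range.
__global__ void connectivity_step(const Node *nodes, int *degree, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    int count = 0;
    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        float dx = nodes[i].x - nodes[j].x;
        float dy = nodes[i].y - nodes[j].y;
        if (dx * dx + dy * dy <= COMM_RANGE * COMM_RANGE)
            ++count;
    }
    degree[i] = count;
}

int main(void)
{
    Node *d_nodes;
    int  *d_degree;
    cudaMalloc((void **)&d_nodes, NUM_NODES * sizeof(Node));
    cudaMalloc((void **)&d_degree, NUM_NODES * sizeof(int));
    cudaMemset(d_nodes, 0, NUM_NODES * sizeof(Node));

    dim3 block(256), grid((NUM_NODES + 255) / 256);
    float dt = 0.1f;               // assumed simulation time step

    // The CPU (master) drives the rounds; back-to-back kernel launches give an
    // implicit global synchronization point between mobility and connectivity.
    for (int round = 0; round < 10; ++round) {
        mobility_step<<<grid, block>>>(d_nodes, NUM_NODES, dt);
        connectivity_step<<<grid, block>>>(d_nodes, d_degree, NUM_NODES);
    }
    cudaDeviceSynchronize();

    cudaFree(d_nodes);
    cudaFree(d_degree);
    return 0;
}

Splitting mobility and connectivity into two kernels avoids races on node positions, since a kernel launch boundary acts as a device-wide synchronization point.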
In this work, we propose a coordinator-master-worker (CMW) model for medium to extra-large scale network simulation. The model supports distributed and parallel simulation on heterogeneous computing nodes combining multicore CPUs and GPUs, and aims to maximize hardware utilization while reducing the overall management overhead. In the CMW model, the coordinator is the top-level simulation CPU process; it performs an initial partitioning of the simulation into multiple instances and is responsible for load-balancing and synchronization services among all active masters. The master is also a CPU process and provides event scheduling, synchronization, and communication services to the workers. It manages workers that may operate on different computing resources within the same shared-memory context and communicates with the coordinator and other masters through the Message Passing Interface (MPI). The worker is the elementary actor of the CMW model; it executes the simulation routines, interacts with the input and output data, and can be either a CPU or a GPU thread. Compared to existing master-worker models, CMW is natively parallel and GPU-compliant, and can be extended to support additional computing resources. The performance gain of the model is evaluated through different benchmarking scenarios on low-cost, publicly available GPU platforms. The results show that a speedup of up to 3000x can be achieved compared to a sequential execution, and up to 6x compared to a mono-GPU master-worker-based simulation. The hardware activity rates of the CMW services on both CPU and GPU are analyzed in detail.
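
The control flow of the CMW model could be sketched roughly as follows, assuming MPI rank 0 plays the coordinator, every other rank is a master, and GPU threads act as workers. The names worker_round and PARTITION_TAG, the fixed partition size, and the fixed round count are hypothetical placeholders rather than the paper's implementation.

// Hedged CMW sketch: coordinator = rank 0, masters = other MPI ranks,
// workers = GPU threads launched by each master.
#include <mpi.h>
#include <cuda_runtime.h>

#define PARTITION_TAG 1

// Worker: one GPU thread handles one node of the master's partition.
__global__ void worker_round(float *state, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        state[i] += 1.0f;          // placeholder for the node's simulation routine
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        // Coordinator: static initial partitioning, then per-round global sync.
        int nodes_per_master = 1024;   // assumed partition size
        for (int m = 1; m < size; ++m)
            MPI_Send(&nodes_per_master, 1, MPI_INT, m, PARTITION_TAG,
                     MPI_COMM_WORLD);
        for (int round = 0; round < 10; ++round)
            MPI_Barrier(MPI_COMM_WORLD);
    } else {
        // Master: receives its partition, then schedules GPU workers each round.
        int n;
        MPI_Recv(&n, 1, MPI_INT, 0, PARTITION_TAG, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        float *d_state;
        cudaMalloc((void **)&d_state, n * sizeof(float));
        cudaMemset(d_state, 0, n * sizeof(float));

        dim3 block(256), grid((n + 255) / 256);
        for (int round = 0; round < 10; ++round) {
            worker_round<<<grid, block>>>(d_state, n);
            cudaDeviceSynchronize();   // local synchronization of the workers
            MPI_Barrier(MPI_COMM_WORLD); // coordinator-level synchronization
        }
        cudaFree(d_state);
    }

    MPI_Finalize();
    return 0;
}

In this sketch the coordinator's role is limited to partitioning and barrier-based synchronization; a real CMW deployment would also exchange load-balancing and inter-master messages over MPI, which are omitted here.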