2019
DOI: 10.1145/3291048
A Survey on Agent-based Simulation Using Hardware Accelerators

Abstract: Due to decelerating gains in single-core CPU performance, computationally expensive simulations are increasingly executed on highly parallel hardware platforms. Agent-based simulations, where simulated entities act with a certain degree of autonomy, frequently provide ample opportunities for parallelisation. Thus, a vast variety of approaches proposed in the literature demonstrated considerable performance gains using hardware platforms such as many-core CPUs and GPUs, merged CPU-GPU chips as well as FPGAs. Ty…

Cited by 39 publications (26 citation statements) · References 161 publications
“…To implement the two-state access graph, two sets of state variables are used. In contrast, in the independent and ordered update schemes, agents can be processed at the same time, making the update scheme suitable for execution on parallel computing platforms such as GPUs or FPGAs [39,40].…”
Section: Cyclic Dependent Agents
confidence: 99%
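The two-state scheme this excerpt refers to can be sketched as a double-buffered update: agents read only the previous state buffer and write only the next one, so every agent update is independent. The averaging rule, the ring topology, and all names below are illustrative assumptions, not details from the cited work.

```python
# Minimal sketch of a double-buffered ("two-state") agent update scheme:
# each agent reads only the previous state and writes only the next state,
# so all agent updates are mutually independent and could run in parallel
# (e.g., one GPU thread per agent). The averaging rule and ring topology
# are illustrative assumptions.

def step(current):
    """Return the next state; each agent averages with its ring neighbours."""
    n = len(current)
    nxt = [0.0] * n                    # second buffer: write-only this step
    for i in range(n):                 # loop iterations are independent
        left = current[(i - 1) % n]
        right = current[(i + 1) % n]
        nxt[i] = 0.5 * current[i] + 0.25 * (left + right)
    return nxt

def simulate(initial, steps):
    state = list(initial)
    for _ in range(steps):
        state = step(state)            # the new buffer becomes the read buffer
    return state
```

Because no agent ever writes to the buffer it reads, the loop body maps directly onto a parallel kernel with one state memory per buffer, which is the property that makes such schemes attractive for GPUs and FPGAs.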
“…Some recent research aims at integrating differentiable programming facilities into machine learning frameworks such as PyTorch [41]. Implementing differentiable agent-based models within such frameworks would enable an efficient unification of simulation-based optimization and neural network training, making use of the frameworks' optimized GPU-based implementations of neural networks and automatic differentiation while accelerating the model execution through fine-grained many-core parallelism [64,65].…”
Section: Limitations and Research Directions
confidence: 99%
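The idea of a differentiable agent-based model can be illustrated without any framework: the dual-number class below implements forward-mode automatic differentiation by hand, so the gradient of a toy model's output with respect to a model parameter is computed alongside the simulation itself. The `Dual` class, the growth rule, and all names are illustrative assumptions; a real implementation would rely on PyTorch's or JAX's autograd, as the quoted text suggests.

```python
# Sketch of a differentiable agent-based model via forward-mode automatic
# differentiation with dual numbers. Every value carries its derivative
# with respect to one parameter, so running the simulation also yields
# the gradient of its output. Purely illustrative; not from the survey.

class Dual:
    """A number paired with its derivative w.r.t. one chosen parameter."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def simulate(rate, steps=5):
    """Toy model: each step, every agent grows by `rate` times the mean."""
    agents = [Dual(float(i)) for i in range(1, 4)]
    for _ in range(steps):
        mean = (agents[0] + agents[1] + agents[2]) * Dual(1.0 / 3.0)
        agents = [a + rate * mean for a in agents]
    # Return the aggregate output and its derivative w.r.t. `rate`.
    return sum(a.val for a in agents), sum(a.dot for a in agents)

# Seeding rate.dot = 1.0 requests d(output)/d(rate).
out, grad = simulate(Dual(0.1, 1.0))
```

With this growth rule the total follows T_k = 6(1 + r)^k, so `out` and `grad` agree with 6(1+r)^5 and 30(1+r)^4; a gradient-based optimizer could use `grad` to fit `rate` to data, which is exactly the simulation-based optimization the excerpt envisions.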
“…A wide variety of dedicated hardware accelerators have been developed for general computing functions, e.g., simulations [208] and graph processing [209]. To the best of our knowledge, there is no prior survey of dedicated hardware accelerators for NFs.…”
Section: E. Dedicated Accelerators
confidence: 99%