2018
DOI: 10.1504/ijaose.2018.094373
Quantitative analysis of multi-agent systems through statistical verification of simulation traces

Cited by 5 publications (4 citation statements) · References 0 publications
“…It is challenging to determine whether a conclusion drawn from an agent-based model, such as whether a particular behaviour is likely to emerge, is robust or just reflects random chance or the absence of some critical features of the simulated phenomena. Miles and colleagues have developed methods and tools for quantifying whether an observed property can confidently be asserted about the modelled system, drawing on approximate probabilistic model checking and temporal logic [45]. They extended this general approach to allow causal processes within a model to be detected [44].…”
Section: Main Approaches and Key Results
confidence: 99%
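The approach cited above estimates, from repeated simulation runs, how confidently a temporal-logic property can be asserted of the modelled system. Below is a minimal sketch of that style of approximate probabilistic (statistical) model checking; the random-walk model, the property, and all names are illustrative assumptions, not the cited tool's implementation.

import math
import random

# Sketch of statistical model checking over simulation traces.
# The agent model and property below are illustrative stand-ins.

def simulate_trace(steps=100):
    """Toy agent trace: a random walk standing in for one model run."""
    x, trace = 0, []
    for _ in range(steps):
        x += random.choice((-1, 1))
        trace.append(x)
    return trace

def eventually_above(trace, threshold=10):
    """Bounded temporal property F(x > threshold), checked on one trace."""
    return any(x > threshold for x in trace)

def estimate_probability(epsilon=0.05, delta=0.01):
    """Chernoff-Hoeffding bound: n runs give an estimate within epsilon
    of the true probability with confidence at least 1 - delta."""
    n = math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))
    hits = sum(eventually_above(simulate_trace()) for _ in range(n))
    return hits / n, n

p_hat, runs = estimate_probability()
print(f"P(F x>10) ~= {p_hat:.3f} from {runs} runs")

The key design point is that the number of runs is fixed in advance from the desired error bound and confidence, so the resulting estimate is statistically sound rather than anecdotal.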
“…This is impossible where simulations are manually developed from domain models. In generating simulation experiment models, the automated generator can take into consideration expected boundaries for statistical significance and choose appropriate sensitivity analyses for robustness checking—for example by building on tools such as [20] or Spartan [5]. Again, the generation rules are explicit artefacts that can be referenced from the fitness-for-purpose argument and inspected as needed.…”
Section: Overview of Vision
confidence: 99%
“…For example, Spartan [5] is a toolkit supporting statistical analysis of simulation runs to alleviate aleatory uncertainty and undertake sensitivity analysis, focusing on numerical output data. Similarly, [20] provides support for drawing statistically sound conclusions based on temporal-logic queries about patterns of agent behaviour. However, neither of these approaches is currently integrated with the initial domain model; they remain at the platform-specific level.…”
Section: Related Work
confidence: 99%
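As a companion to the Spartan-style consistency analysis described above, here is a hedged sketch of the Vargha-Delaney A-test that such analysis applies to replicate run sets to check whether observed differences exceed aleatory uncertainty; the sample data and the 0.06 "negligible effect" threshold are illustrative assumptions.

import random
from itertools import product

def vargha_delaney_a(sample1, sample2):
    """Probability that a value drawn from sample1 exceeds one drawn
    from sample2 (ties count half); 0.5 means no scientific difference."""
    greater = sum(a > b for a, b in product(sample1, sample2))
    ties = sum(a == b for a, b in product(sample1, sample2))
    return (greater + 0.5 * ties) / (len(sample1) * len(sample2))

# Two sets of replicate runs differing only in random seed (toy data).
set1 = [random.gauss(50, 5) for _ in range(20)]
set2 = [random.gauss(50, 5) for _ in range(20)]
a = vargha_delaney_a(set1, set2)
# |A - 0.5| < 0.06 is commonly read as a negligible effect size.
print(f"A = {a:.3f}; negligible difference: {abs(a - 0.5) < 0.06}")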
“…On the first day of the workshop, participants presented their perspectives on the state of the art in simulation and simulation tools, introducing their existing work. On the application side, this included work using simulation in immunology (e.g., [1,7,11,13,15,17]), vascular biology (e.g., [4,5]), and synthetic biology [12], as well as in the social sciences [8] and, more generally, the evaluation of simulation results (e.g., [9,10]). Equally, Cosmo Tech, Slingshot Simulations, and the FLAME GPU Team presented on different approaches to high-performance simulation platforms that allow for domain specialization.…”
Section: Overview of Workshop and Participants
confidence: 99%