2003
DOI: 10.3141/1831-09
Systematic Investigation of Variability due to Random Simulation Error in an Activity-Based Microsimulation Forecasting Model

Abstract: A key difference between stochastic microsimulation models and more traditional forms of travel demand forecasting models is that microsimulation-based forecasts change each time the sequence of random numbers used to simulate choices is varied. To address practitioners’ concerns about this variation, a common approach is to run the microsimulation model several times and average the results. The question then becomes: What is the minimum number of runs required to reach a true average state for a given set o…

Cited by 32 publications (22 citation statements). References 2 publications.
“…Specifically, several operational analytic frameworks within the activity analysis paradigm have been formulated, and some metropolitan areas have even implemented these frameworks (Waddell et al., 2002 and Castiglione et al., 2003).…”
Section: Introduction
confidence: 99%
“…The microsimulation run time increases linearly with the size of the synthetic population. As elaborated further below, the size of the synthetic population and the number of runs that are required is based on the statistic of interest for the analysis (21). For urban-wide statistics (for example, total VMT), one run using a 20% population appears to be sufficient.…”
Section: Computation Time and Disk Space
confidence: 99%
“…The advantage of sampling error (as opposed to aggregation error) is that one can estimate the size of the error and generate confidence intervals. (21) presented a thorough analysis of sampling error resulting from microsimulation runs of the San Francisco model. They used a full synthetic population and showed that the magnitude of the sampling error varies based on the characteristics of the statistic of interest.…”
Section: Understanding Simulation Error
confidence: 99%
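The confidence-interval idea in the statement above can be sketched as follows. The per-run values are hypothetical, and a normal approximation (z = 1.96) stands in for whatever distributional assumption the cited analysis actually used:

```python
import statistics

# Hypothetical outputs (e.g., regional VMT, in arbitrary units) from
# eight independent microsimulation runs with different random seeds.
runs = [100.0, 98.0, 103.0, 101.0, 97.0, 102.0, 99.0, 100.0]

mean = statistics.mean(runs)                 # point estimate across runs
sd = statistics.stdev(runs)                  # sample standard deviation
half_width = 1.96 * sd / len(runs) ** 0.5    # 95% CI half-width (normal approx.)

print(f"mean={mean:.1f}, 95% CI = [{mean - half_width:.2f}, {mean + half_width:.2f}]")
```

Because the error is sampling error, the half-width shrinks with the square root of the number of runs, which is what makes run-averaging effective.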
“…The question then becomes: what is the minimum number of runs required to reach a stable result (i.e., with a certain level of confidence that the obtained average value can only vary within an acceptable interval)? In this respect, several relevant studies have been conducted by Benekohal and Abu-Lebdeh (1994), Hale (1997), Veldhuisen et al. (2000b), Esser and Nagel (2001), Vovsha et al. (2002), Castiglione et al. (2003), Ziems et al…
Section: Introduction
confidence: 99%
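The minimum-runs question posed above is commonly answered with the standard sample-size formula n ≥ (z·s / (ε·x̄))², where s and x̄ come from a pilot set of runs and ε is the acceptable relative error. This is a generic sketch of that textbook formula, not the exact procedure of any of the cited studies:

```python
import math

def min_runs(pilot_sd: float, pilot_mean: float,
             rel_tol: float = 0.05, z: float = 1.96) -> int:
    """Number of runs needed so the 95% CI half-width on the run
    average stays within rel_tol * mean (normal approximation)."""
    return math.ceil((z * pilot_sd / (rel_tol * pilot_mean)) ** 2)

# E.g., pilot runs with mean 100 and sd 10, targeting +/- 5%:
print(min_runs(10.0, 100.0))  # -> 16
```

Statistics with larger relative variance (e.g., trips in a single small zone) drive the required number of runs up quadratically, which matches the observation that the answer depends on the statistic of interest.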
“…(2011), Horni et al. (2011), and Cools et al. (2011). In particular, Castiglione et al. (2003) investigated the extent of random variability in the San Francisco model (a micro-simulation model system) by running the model 100 times at three levels of geographic detail, namely zone level, neighborhood level, and county-wide level. The analysis was then conducted by showing how quickly the mean values of output variables such as the number of trips per person converge towards the final mean value (after 100 runs) as the number of simulation runs increases.…”
Section: Introduction
confidence: 99%
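The convergence check described in the statement above — watching the running mean approach the final (all-runs) mean as the number of runs grows — can be sketched like this. The data, tolerance, and function names are illustrative, not the San Francisco model's actual outputs:

```python
def running_means(values):
    """Cumulative mean after each additional simulation run."""
    total, out = 0.0, []
    for i, v in enumerate(values, start=1):
        total += v
        out.append(total / i)
    return out

def runs_to_converge(values, tol=0.01):
    """Smallest run count n such that every running mean from run n
    onward stays within +/- tol (relative) of the final mean."""
    means = running_means(values)
    final = means[-1]
    for n in range(1, len(means) + 1):
        if all(abs(m - final) <= tol * abs(final) for m in means[n - 1:]):
            return n
    return len(means)
```

Requiring the running mean to *stay* inside the band (rather than merely touch it once) avoids declaring convergence on a lucky early crossing.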