2002
DOI: 10.1145/643114.643115

On the processor scheduling problem in time warp synchronization

Abstract: Time Warp is a synchronization mechanism for parallel/distributed simulation. It allows logical processes (LPs) to execute events without the guarantee of a causally consistent execution. Upon the detection of a causality violation, rollback procedures recover the state of the simulation to a correct value. When a rollback occurs there are two primary sources of performance loss: (1) CPU time must be spent for the execution of the rollback procedures and (2) waste of CPU time arises from the invalidation of events […]
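The rollback mechanism the abstract describes can be illustrated with a minimal sketch. This is not the paper's implementation: the class name, the saved-state list, and the toy state transition are all illustrative assumptions. The sketch shows the core idea only: an LP processes events optimistically, and when a "straggler" event arrives with a timestamp lower than the LP's local virtual time (a causality violation), the LP restores the latest state snapshot taken before the straggler, discarding (wasting) the work done after it.

```python
# Illustrative sketch of Time Warp causality-violation detection and
# rollback. All names (TimeWarpLP, lvt, saved) are hypothetical; the
# state transition is a toy placeholder.
class TimeWarpLP:
    def __init__(self):
        self.lvt = 0.0            # local virtual time
        self.state = 0            # toy simulation state
        self.saved = [(0.0, 0)]   # checkpoints: (timestamp, state snapshot)

    def process(self, ts):
        if ts < self.lvt:         # straggler event -> causality violation
            self.rollback(ts)
        self.lvt = ts
        self.state += 1           # toy state transition
        self.saved.append((ts, self.state))

    def rollback(self, ts):
        # Discard all checkpoints at or after the straggler's timestamp:
        # the CPU time spent producing them is the "wasted computation"
        # the abstract refers to.
        while self.saved and self.saved[-1][0] >= ts:
            self.saved.pop()
        self.lvt, self.state = self.saved[-1]

# Usage: processing 1.0 then 5.0 succeeds; the straggler at 3.0 forces
# a rollback to the checkpoint taken at timestamp 1.0 before re-executing.
lp = TimeWarpLP()
lp.process(1.0)
lp.process(5.0)
lp.process(3.0)   # straggler: rolls back, then processes at ts=3.0
```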

Cited by 13 publications (9 citation statements)
References 23 publications
“…Among several proposals [22], the common choice is represented by the Lowest-Timestamp-First (LTF) algorithm [15], which selects the LP whose pending next-event has the minimum timestamp, compared to the pending next-events of the other LPs hosted by the same kernel.…”
Section: Optimistic Simulation Overview
confidence: 99%
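The LTF rule quoted above can be sketched in a few lines. This is an assumed minimal rendering, not the cited implementation: the `LP` class, `schedule`, and `ltf_select` names are illustrative. Each LP keeps its pending events in a timestamp-ordered (min-heap) queue, and the kernel dispatches the LP whose pending next-event carries the minimum timestamp.

```python
# Minimal sketch of Lowest-Timestamp-First (LTF) scheduling among the
# LPs hosted by one simulation kernel. Names are hypothetical.
import heapq

class LP:
    """A logical process with a timestamp-ordered queue of pending events."""
    def __init__(self, lp_id):
        self.lp_id = lp_id
        self.pending = []               # min-heap of event timestamps

    def schedule(self, ts):
        heapq.heappush(self.pending, ts)

    def next_event_timestamp(self):
        # Peek at the minimum pending timestamp without removing it.
        return self.pending[0] if self.pending else float("inf")

def ltf_select(lps):
    """Return the LP whose pending next-event has the minimum timestamp."""
    runnable = [lp for lp in lps if lp.pending]
    if not runnable:
        return None
    return min(runnable, key=lambda lp: lp.next_event_timestamp())

# Usage: LP 1 holds the globally minimum pending timestamp, so LTF picks it.
lps = [LP(0), LP(1), LP(2)]
lps[0].schedule(12.5)
lps[1].schedule(3.1)
lps[2].schedule(7.0)
print(ltf_select(lps).lp_id)   # prints 1
```

LTF is a greedy heuristic: dispatching the lowest-timestamp event first tends to reduce the chance that a later-dispatched event invalidates already-executed work, which is exactly the rollback cost the surrounding citations discuss.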
“…This approach is not able to promptly react to the (system-wide) dynamic generation and injection of events with higher priority, i.e., lower timestamps, compared to the one currently being processed by some CPU-core. Consequently, it is not fully optimized, given that the generation of rollbacks, and the associated waste of computation, tends to increase when events are CPU-dispatched and processed according to a rule that does not fully fit the priorities associated with the dynamic generation of timestamped events [Quaglia and Cortellessa 2002]. We note that the reduction of rollback incidence cannot be fully tackled by solely relying on load balancing/sharing strategies (see, e.g., [Carothers and Fujimoto 2000; Choe and Tropper 1999; Glazer and Tropper 1993; Vitali et al. 2012]) since they operate as long-term planners for fruitful CPU usage, and are thus not suited for "prompt" response to punctual variations of the event priorities over time.…”
Section: Introduction
confidence: 99%
“…As a consequence, the platform-level software is not allowed to re-evaluate CPU assignment until the completion of the last-dispatched simulation event. Therefore, it is not able to CPU-dispatch any other simulation event that may have been produced in the system, which may have a higher priority (e.g., a lower timestamp) compared to the one currently being processed by the CPU [22].…”
Section: Introduction
confidence: 99%