2011
DOI: 10.1587/transcom.e94.b.2199

High-Resolution Timer-Based Packet Pacing Mechanism on the Linux Operating System

Abstract: Packet pacing is a well-known technique for reducing the short-time-scale burstiness of traffic, and software-based packet pacing has been categorized into two approaches: a timer interrupt-based approach and a gap packet-based approach. The former was hard to implement for Gigabit-class networks because it requires the operating system to maintain a microsecond-resolution timer per stream, thus incurring a large overhead. On the other hand, a gap packet-based packet pacing mechanism achieves precise …
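To make the timer-based approach concrete, the following user-space C sketch paces a UDP stream by releasing each packet at an absolute deadline spaced by packet_size / target_rate, which is exactly the per-stream fine-grained timer the abstract refers to. It is only an illustration of the idea, not the paper's in-kernel mechanism; the destination address, port, packet size, and rate are placeholder values.

/* Minimal user-space sketch of timer-based pacing (illustrative only):
 * each packet is sent at an absolute deadline spaced by its
 * serialization time at the target rate. */
#include <stdint.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define PKT_SIZE 1500u           /* bytes per packet (assumed)        */
#define RATE_BPS 100000000ull    /* 100 Mbit/s target rate (assumed)  */

static void timespec_add_ns(struct timespec *t, uint64_t ns)
{
    t->tv_nsec += (long)(ns % 1000000000ull);
    t->tv_sec  += (time_t)(ns / 1000000000ull);
    if (t->tv_nsec >= 1000000000L) { t->tv_nsec -= 1000000000L; t->tv_sec++; }
}

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst = { .sin_family = AF_INET, .sin_port = htons(9000) };
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* placeholder address */

    char buf[PKT_SIZE];
    memset(buf, 0, sizeof(buf));

    /* Inter-packet gap in nanoseconds: bits per packet / target rate.
     * 1500 B at 100 Mbit/s -> 120 us between packet departures. */
    uint64_t gap_ns = (uint64_t)PKT_SIZE * 8u * 1000000000ull / RATE_BPS;

    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int i = 0; i < 1000; i++) {
        /* Sleep until this packet's absolute deadline, then send it. */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        sendto(fd, buf, sizeof(buf), 0, (struct sockaddr *)&dst, sizeof(dst));
        timespec_add_ns(&next, gap_ns);
    }
    close(fd);
    return 0;
}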

Cited by 7 publications (7 citation statements)
References 7 publications
“…Implementing Hrtimer: We adopt the mechanism used in [72] and implement a Qdisc that transmits packets by raising qdisc watchdog timer interrupts. The data structures used in the Qdisc are illustrated in Fig.…”
Section: Methods
confidence: 99%
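The citing work describes a Qdisc whose dequeue path re-arms a qdisc watchdog (an hrtimer) when the head packet is not yet due. The sketch below shows that general shape using standard kernel helpers (qdisc_watchdog_init, qdisc_watchdog_schedule_ns, qdisc_dequeue_head); it is not the code from [72], the callback signatures vary across kernel versions, and the private fields and the "pace" name are invented for the example.

/* Sketch of an hrtimer-driven pacing Qdisc (illustrative only). */
#include <linux/module.h>
#include <linux/skbuff.h>
#include <linux/math64.h>
#include <net/pkt_sched.h>
#include <net/sch_generic.h>

struct pace_sched_data {
	struct qdisc_watchdog watchdog;  /* wraps the per-qdisc hrtimer        */
	u64 rate_bps;                    /* pacing rate in bits per second     */
	u64 next_tx_ns;                  /* earliest departure of head packet  */
};

static int pace_enqueue(struct sk_buff *skb, struct Qdisc *sch,
			struct sk_buff **to_free)
{
	/* Just queue the packet; pacing is enforced on the dequeue path. */
	return qdisc_enqueue_tail(skb, sch);
}

static struct sk_buff *pace_dequeue(struct Qdisc *sch)
{
	struct pace_sched_data *q = qdisc_priv(sch);
	u64 now = ktime_get_ns();
	struct sk_buff *skb;

	if (now < q->next_tx_ns) {
		/* Head packet not yet due: arm the watchdog hrtimer so the
		 * qdisc is rescheduled at the release time instead of
		 * relying on a coarse periodic tick. */
		qdisc_watchdog_schedule_ns(&q->watchdog, q->next_tx_ns);
		return NULL;
	}

	skb = qdisc_dequeue_head(sch);
	if (!skb)
		return NULL;

	/* Space the following packet by its serialization time at rate_bps. */
	q->next_tx_ns = now + div64_u64((u64)qdisc_pkt_len(skb) * 8ULL *
					NSEC_PER_SEC, q->rate_bps);
	return skb;
}

static int pace_init(struct Qdisc *sch, struct nlattr *opt,
		     struct netlink_ext_ack *extack)
{
	struct pace_sched_data *q = qdisc_priv(sch);

	q->rate_bps   = 100000000ULL;    /* illustrative default: 100 Mbit/s */
	q->next_tx_ns = 0;
	qdisc_watchdog_init(&q->watchdog, sch);
	return 0;
}

static void pace_destroy(struct Qdisc *sch)
{
	struct pace_sched_data *q = qdisc_priv(sch);

	qdisc_watchdog_cancel(&q->watchdog);   /* disarm the hrtimer */
}

static struct Qdisc_ops pace_qdisc_ops __read_mostly = {
	.id        = "pace",
	.priv_size = sizeof(struct pace_sched_data),
	.enqueue   = pace_enqueue,
	.dequeue   = pace_dequeue,
	.peek      = qdisc_peek_head,
	.init      = pace_init,
	.destroy   = pace_destroy,
	.owner     = THIS_MODULE,
};

static int __init pace_module_init(void)
{
	return register_qdisc(&pace_qdisc_ops);
}

static void __exit pace_module_exit(void)
{
	unregister_qdisc(&pace_qdisc_ops);
}

module_init(pace_module_init);
module_exit(pace_module_exit);
MODULE_LICENSE("GPL");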
See 1 more Smart Citation
“…Implementing Hrtimer We adopt the mechanism used in [72] and implement a Qdisc that transmits packets by raising qdisc watchdog timer interrupts. The data structures used in the Qdisc are illustrated in Fig.…”
Section: Methodsmentioning
confidence: 99%
“…TRC-TCP [68] and [71] realize inter-packet gaps in the transport layer. The former [68] … Pspacer/HT [72] and the Fair Queue (FQ) Qdisc [70] follow another strategy: they create gaps after the protocol stack processing. They implement a Qdisc, which shares a data structure with the protocol stack.…”
Section: Hrtimer-based Interrupts
confidence: 99%
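For contrast with the timer-driven Qdisc above, the gap packet-based approach (used by the original PSPacer) realizes the same spacing without fine-grained timers: dummy gap frames, such as IEEE 802.3x PAUSE frames that the adjacent switch port consumes, are inserted between real packets so that transmitting at full link speed yields the target rate for the real traffic. The sizing arithmetic is sketched below; frame overheads and minimum/maximum frame sizes are ignored here and would need to be accounted for in a real implementation.

#include <stdint.h>

/* Gap-packet sizing sketch: to pace a flow to target_bps on a link of
 * capacity link_bps, each real packet of pkt_bytes is followed by a gap
 * that occupies the remaining serialization budget:
 *
 *   gap_bytes = pkt_bytes * (link_bps / target_bps - 1)
 */
static uint64_t gap_bytes(uint64_t pkt_bytes, uint64_t link_bps,
                          uint64_t target_bps)
{
    if (target_bps == 0 || target_bps >= link_bps)
        return 0;                 /* pacing not needed or not possible */
    return pkt_bytes * link_bps / target_bps - pkt_bytes;
}

/* Example: 1500-byte packets, 1 Gbit/s link, 100 Mbit/s target
 * -> gap_bytes = 1500 * 10 - 1500 = 13500 bytes between real packets. */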
“…In the experiments using ImageNet, we used nodes of the same quality on both the client side and the Cloud side; however, we used only the CPU on the client side and the GPGPU on the Cloud side. We used PSPacer [7] to control the network bandwidth to represent various sensor and Cloud network environments.…”
Section: Machine Performance
confidence: 99%
“…In exp2), as shown in Table 2, the number of Spout threads is set to two, the number of Bolt threads is set to 16, the inter-arrival time of image data is set to 10 ms/tuple, and the network bandwidth between sensors and cloud varies among 10, 50, 100, and 1000 Mbps. We use PSPacer [9] for network bandwidth control. The constructed Storm cluster is shown in Figure 4.…”
Section: Overview of Experiments
confidence: 99%