In this paper, we analyze a performance model for the TCP Congestion Avoidance algorithm. The model predicts the bandwidth of a sustained TCP connection subjected to light to moderate packet losses, such as loss caused by network congestion. It assumes that TCP avoids retransmission timeouts and always has sufficient receiver window and sender data. The model predicts the Congestion Avoidance performance of nearly all TCP implementations under restricted conditions and of TCP with Selective Acknowledgements over a much wider range of Internet conditions. We verify the model through both simulation and live Internet measurements. The simulations test several TCP implementations under a range of loss conditions and in environments with both drop-tail and RED queuing. The model is also compared to live Internet measurements using the TReno diagnostic and real TCP implementations. We also present several applications of the model to problems of bandwidth allocation in the Internet. We use the model to analyze networks with multiple congested gateways; this analysis shows strong agreement with prior work in this area. Finally, we present several important implications about the behavior of the Internet in the presence of high load from diverse user communities.
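A model of this kind yields a closed-form bandwidth estimate from the segment size, round-trip time, and loss rate. The sketch below assumes the widely cited macroscopic form BW ≈ (MSS/RTT) · C/√p, with the constant C = √(3/2) corresponding to periodic loss under Reno-style recovery; the function name and parameter choices are illustrative, not taken from the paper.

```python
import math

def macroscopic_tcp_bw(mss_bytes, rtt_s, loss_rate, c=math.sqrt(3 / 2)):
    """Steady-state bandwidth predicted by the macroscopic model:
    BW ~ (MSS / RTT) * C / sqrt(p).
    C depends on loss pattern and ack strategy; sqrt(3/2) is the
    periodic-loss value.  Returns bytes per second."""
    return (mss_bytes / rtt_s) * c / math.sqrt(loss_rate)

# 1460-byte segments, 100 ms RTT, 1% loss -> roughly 179 KB/s
bw = macroscopic_tcp_bw(1460, 0.1, 0.01)
```

Note the 1/√p dependence: halving the loss rate raises predicted bandwidth by only √2, which is why the model holds for light to moderate loss but breaks down once timeouts dominate.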
This paper gives, in the form of Laplace–Stieltjes transforms and generating functions, the joint distribution of the sojourn time and the number of customers in the system at departure for customers in the general M/G/1 queue with processor sharing (M/G/1/PS).
Explicit formulas are given for a number of conditional and unconditional moments, including the variance of the sojourn time of an ‘arbitrary' customer.
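While the paper's transform results cover the full joint distribution, the first moment for M/G/1/PS has a classical closed form that is easy to state: the unconditional mean sojourn time is E[S]/(1 − ρ), insensitive to the service distribution beyond its mean. The sketch below encodes that known result; the function name is illustrative.

```python
def ps_mean_sojourn(arrival_rate, mean_service):
    """Unconditional mean sojourn time in M/G/1/PS.
    By insensitivity, it depends on the service distribution
    only through its mean: E[T] = E[S] / (1 - rho),
    where rho = lambda * E[S] < 1 for stability."""
    rho = arrival_rate * mean_service
    if rho >= 1:
        raise ValueError("unstable queue: rho >= 1")
    return mean_service / (1 - rho)

# lambda = 0.5 jobs/s, E[S] = 1 s -> rho = 0.5, E[T] = 2 s
t = ps_mean_sojourn(0.5, 1.0)
```

Higher moments such as the variance of the sojourn time do depend on the full service distribution, which is what the transform-level analysis in the paper provides.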
Dynamic load balancing in a system of loosely-coupled homogeneous processors may employ both judicious initial placement of processes and migration of existing processes to processors with fewer resident processes. In order to predict the possible benefits of these dynamic assignment techniques, we analyzed the behavior (CPU, disk, and memory use) of 9.5 million Unix* processes during normal use. The observed process behavior was then used to drive simulation studies of particular dynamic assignment heuristics.
Let F(·) be the probability distribution of the amount of CPU time used by an arbitrary process. In the environment studied we found: 1 − F(x) ≈ r·x^(−c), with 1.05 < c < 1.25. F(·) is far enough from exponential to make exponential models of little use.
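A tail of the form Pr[X > x] ≈ r·x^(−c) is Pareto-like, and with c between 1.05 and 1.25 the distribution has a finite mean but infinite variance. The sketch below draws samples from such a tail by the standard inverse-CDF method; the scale x_min and the function name are illustrative assumptions, not values from the measurement study.

```python
import random

def sample_pareto_cpu(c=1.15, x_min=1.0):
    """Draw a CPU demand with power-law tail
    Pr[X > x] = (x / x_min)^(-c) for x >= x_min,
    via inverse-CDF sampling.  c in (1.05, 1.25) matches the
    observed tail exponent; x_min is an illustrative scale."""
    u = random.random()          # uniform on (0, 1)
    return x_min * u ** (-1.0 / c)

random.seed(0)
samples = [sample_pareto_cpu() for _ in range(1000)]
```

With c ≈ 1.15, a small fraction of processes accounts for a large share of total CPU demand, which is precisely why an exponential model (whose tail decays much faster) is of little use here, and why migrating the rare long-lived processes pays off.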
With a foreground-background process scheduling policy in each processor, simple heuristics for initial placement and processor migration can significantly improve the response ratios of processes that demand exceptional amounts of CPU, without harming the response ratios of ordinary processes.