Proceedings 16th International Parallel and Distributed Processing Symposium 2002
DOI: 10.1109/ipdps.2002.1015527

The end-to-end performance effects of parallel TCP sockets on a lossy wide-area network

Abstract: Introduction. There are considerable efforts within the Grid and high performance computing communities to improve end-to-end network performance for applications that require substantial amounts of network bandwidth. The Atlas project [19], for example, must be able to reliably transfer over 2 Petabytes of data per year over transatlantic networks between Europe and the United States. Recent experience [1,2] has demonstrated that actual aggregate TCP throughput realized by high performance applications is persis…

Cited by 162 publications (134 citation statements)
References: 34 publications
“…When competing with connections over a congested link, each of the parallel streams will be less likely to be selected for having their packets dropped, and therefore the aggregate amount of potential bandwidth which must go through premature congestion avoidance or slow start is reduced. An application opening N multiple TCP connections is in essence creating a large virtual MSS on the aggregate connection that is N times the MSS of a single connection [25].…”
Section: Parallel Streams
confidence: 99%
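The virtual-MSS view quoted above can be made concrete with a steady-state throughput bound. The sketch below assumes the Mathis-style formula that [25] builds on, with all N streams seeing the same MSS, round-trip time RTT, and loss rate p:

% Single-stream steady-state bound (Mathis et al.): BW_1 <= (MSS * C) / (RTT * sqrt(p))
% Treating N parallel streams as one connection with an effective MSS of N * MSS gives
\[
  BW_N \;\le\; N \cdot \frac{MSS \cdot C}{RTT \sqrt{p}}
       \;=\; \frac{(N \cdot MSS)\, C}{RTT \sqrt{p}},
\]
% i.e. the aggregate behaves like a single stream whose segment size is N times larger.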
“…We used the simulator error model to simulate losses in the bottleneck link. This loss model was set to drop a … The theoretical performance of parallel TCP streams over a range of packet loss rates follows the equation presented in [25], for the conditions of this experiment (MSS = 1500 bytes, RTT = 100 ms, C = 1, and packet losses impact parallel streams to the same extent). For the response function of HSTCP, the equation defined in [18] was used.…”
Section: Parallel Streams Transfer
confidence: 99%
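The theoretical curves referred to in that passage can be sketched numerically; this is a sketch that assumes the equation in [25] is the Mathis-style bound summed over identical streams (the quoted passage does not reproduce it), and the loss-rate values are illustrative:

# Sketch: theoretical aggregate throughput of N parallel TCP streams,
# assuming the equation in [25] is the Mathis-style bound summed over
# identical streams: BW_N = N * (MSS / RTT) * (C / sqrt(p)).
# Parameters follow the experiment quoted above (MSS = 1500 bytes,
# RTT = 100 ms, C = 1); the loss rates are illustrative values.
import math

MSS = 1500 * 8        # bits per segment
RTT = 0.100           # seconds
C   = 1.0

def aggregate_throughput_bps(n_streams, loss_rate):
    """Upper bound on aggregate throughput (bits/s) of n identical streams."""
    return n_streams * (MSS / RTT) * (C / math.sqrt(loss_rate))

for p in (1e-4, 1e-3, 1e-2):
    for n in (1, 4, 8):
        bw = aggregate_throughput_bps(n, p) / 1e6
        print(f"p={p:.0e}  N={n}  bound = {bw:8.2f} Mbit/s")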
“…The model we study in this paper describes situations in which several TCP connections are opened by one user between the same source and destination. Such models have been extensively studied in the literature (see e.g., [5,10,11]). Losses (which are interpreted as congestion signals) are used as signals to reduce the window size of one of the connections.…”
Section: Introduction
confidence: 99%
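A toy sketch of the model described in that passage, with hypothetical parameters: several additive-increase windows share one path, and each loss event halves the window of a single connection.

# Toy sketch of the model quoted above: N TCP connections between the same
# source and destination grow their windows additively, and each loss event
# (congestion signal) halves the window of just one of the connections.
# All parameters and the random loss process are hypothetical.
import random

def simulate(n_conns=4, rounds=1000, loss_prob=0.05, seed=1):
    random.seed(seed)
    windows = [1.0] * n_conns          # congestion windows, in segments
    for _ in range(rounds):
        # additive increase: each connection grows by one segment per RTT
        windows = [w + 1.0 for w in windows]
        if random.random() < loss_prob:
            # multiplicative decrease applied to a single connection
            victim = random.randrange(n_conns)
            windows[victim] /= 2.0
    return windows

print(simulate())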
“…Parallel TCP streams are aggressive on a shared network and can steal bandwidth from competing TCP streams. Previous work [22] showed that a single application using N parallel TCP streams competing with k other streams will receive N/(N+k) of the network bandwidth, rather than the 1/(N+k) portion the other streams receive.…”
Section: Related Work
confidence: 99%
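The N/(N+k) share claim from [22] can be illustrated with a small numeric sketch; the link capacity and the (N, k) combinations below are hypothetical.

# Illustration of the share claim from [22]: an application with N parallel
# streams receives N/(N+k) of the bottleneck bandwidth, while each of the k
# competing single streams receives 1/(N+k). Link capacity is hypothetical.
link_mbps = 100.0
for n, k in [(1, 4), (4, 4), (8, 4)]:
    app_share   = n / (n + k)
    other_share = 1 / (n + k)
    print(f"N={n}, k={k}: app gets {app_share:5.1%} "
          f"({app_share * link_mbps:5.1f} Mbit/s), "
          f"each competitor gets {other_share:5.1%}")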
“…In previous work [22], we performed a series of network measurements between the University of Michigan in Ann Arbor and NASA AMES in Moffett Field, CA over a high speed path to determine if parallel TCP streams were effective at improving throughput, and to gain insight on the relationship between the number of parallel TCP streams and performance. We found that as the number of TCP sockets used in the parallel stream increased, aggregate throughput increased linearly until the bandwidth-delay product of the network was achieved.…”
Section: Introduction
confidence: 99%
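A rough numeric sketch of that linear-scaling observation, assuming each stream is limited by a fixed socket-buffer window; the path capacity, RTT, and per-stream window are hypothetical, not the measured Michigan-to-NASA path.

# Illustration of the linear-scaling observation: if each stream is limited
# by a fixed window, aggregate throughput grows roughly linearly with the
# number of streams until it reaches the path's bandwidth-delay product.
# The link capacity, RTT, and per-stream window here are hypothetical.
capacity_bps = 622e6                 # hypothetical OC-12-class path
rtt_s        = 0.070
window_bytes = 64 * 1024             # hypothetical per-stream socket buffer

bdp_bytes = capacity_bps / 8 * rtt_s
per_stream_bps = window_bytes * 8 / rtt_s

for n in (1, 2, 4, 8, 16, 32, 64):
    aggregate = min(n * per_stream_bps, capacity_bps)
    print(f"{n:3d} streams: ~ {aggregate / 1e6:6.1f} Mbit/s "
          f"(path capacity {capacity_bps / 1e6:.0f} Mbit/s, "
          f"BDP {bdp_bytes / 1e6:.2f} MB)")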