1998
DOI: 10.1145/384265.291048

Locality-aware request distribution in cluster-based network servers

Abstract: We consider cluster-based network servers in which a front-end directs incoming requests to one of a number of back-ends. Specifically, we consider content-based request distribution:


Cited by 93 publications (128 citation statements)
References 10 publications (3 reference statements)
“…In this section, we describe and analyse the most recent architectures that implement content-aware load balancing. Some works prove that by using the content of requests and loading information from the servers, more flexible and intelligent distributing algorithms can be developed [1,3,11,14,18,49,60,62,74].…”
Section: Content-aware Load Balancing Architectures
confidence: 99%
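The content-aware strategy this paper introduces can be illustrated with a minimal Python sketch of basic locality-aware request distribution: the front-end remembers which back-end served each target, routes repeat requests there to exploit that node's cache, and reassigns only when the node is overloaded. The threshold names and values (`T_LOW`, `T_HIGH`) are illustrative assumptions, not the paper's exact tuning.

```python
T_LOW, T_HIGH = 2, 6  # illustrative load thresholds (active requests per back-end)

class LardFrontEnd:
    def __init__(self, backends):
        self.load = {b: 0 for b in backends}  # active requests per back-end
        self.assigned = {}                    # target (e.g. URL) -> back-end

    def _least_loaded(self):
        return min(self.load, key=self.load.get)

    def dispatch(self, target):
        """Pick a back-end for `target`, favoring the node that likely
        caches it already, unless that node is overloaded."""
        server = self.assigned.get(target)
        if server is None:
            # First request for this target: assign it to the least-loaded node.
            server = self._least_loaded()
            self.assigned[target] = server
        elif ((self.load[server] > T_HIGH and
               self.load[self._least_loaded()] < T_LOW)
              or self.load[server] >= 2 * T_HIGH):
            # Reassign only when the current node is overloaded and another
            # node is idle enough, limiting needless cache churn.
            server = self._least_loaded()
            self.assigned[target] = server
        self.load[server] += 1
        return server

    def done(self, server):
        self.load[server] -= 1  # request finished on this back-end
```

Repeated requests for the same target return the same back-end until load forces a reassignment, which is what produces the locality benefit the citing surveys highlight.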
“…Hence, the same ISS and IRS numbers have to be used in both connections. Pai et al in [60] introduce some modifications to Hunt's Hand-off and apply it to their request distribution algorithm (Locality-Aware Request Distribution (LARD)) that is covered in Section 6.1. Aron et al include some modifications to the TCP Hand-off mechanism to permit a granularity of individual requests when using HTTP/1.1 persistent connections in [3].…”
Section: TCP Hand-off
confidence: 99%
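The point quoted above, that the same ISS and IRS numbers must carry over when a connection is handed off, can be sketched as the state a front-end would ship to a back-end so the back-end can adopt the established client connection transparently. The field and function names here are hypothetical; only the requirement to preserve the initial sequence numbers comes from the cited text.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HandoffState:
    client_addr: tuple       # (ip, port) of the client
    iss: int                 # initial send sequence number chosen at connection setup
    irs: int                 # initial receive sequence number from the client
    snd_nxt: int             # next sequence number to send
    rcv_nxt: int             # next sequence number expected from the client
    buffered_request: bytes  # bytes already read (e.g. the HTTP request)

def adopt(state: HandoffState) -> dict:
    """Back-end side: rebuild connection state using the *same* ISS/IRS,
    so the client sees one uninterrupted TCP connection."""
    return {"iss": state.iss, "irs": state.irs,
            "snd_nxt": state.snd_nxt, "rcv_nxt": state.rcv_nxt}
```

A real hand-off happens inside the kernel's TCP stack; this sketch only shows why the sequence-number state must travel with the connection.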
“…Since no service time information was recorded, the simulation estimated the service time as the sum of the (constant) time to establish and close a connection, and the (variable, size-dependent) time required to retrieve and transfer a file. The justification for this estimation method may be found in [17,19,23]. The selection of the particular data trace was motivated by the fact that it exhibits arrival-rate fluctuations corresponding to light, medium and heavy loadings in this temporal order, as evidenced by figures 3 and 4.…”
Section: Multi-threaded Back-end Servers
confidence: 99%
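The estimation rule quoted above, a constant per-connection cost plus a size-dependent retrieval-and-transfer cost, reduces to a one-line model. The constants below are illustrative assumptions, not values from the cited simulation.

```python
CONN_OVERHEAD_MS = 0.5     # assumed constant cost to establish and close a connection
TRANSFER_MS_PER_KB = 0.04  # assumed cost to retrieve and transfer one kilobyte

def estimated_service_time_ms(file_size_bytes: int) -> float:
    """Estimate service time as a fixed connection cost plus a cost
    proportional to the size of the file being transferred."""
    return CONN_OVERHEAD_MS + TRANSFER_MS_PER_KB * (file_size_bytes / 1024)
```

With these placeholder constants, a 10 KB response is estimated at 0.5 + 0.4 = 0.9 ms.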
“…The dispatcher receives incoming requests and then decides how to assign them to back-end servers, which in turn serve the requests according to some discipline. The dispatcher is also responsible for passing incoming data pertaining to a job from a client to a back-end server, so that, for each job in progress at a back-end server there is an open connection between the dispatcher and that server [17,23].…”
Section: Introduction
confidence: 99%
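The dispatcher model described above (assign each job to a back-end, then keep one open connection per job in progress for relaying client data) can be sketched minimally as follows. Round-robin stands in as a placeholder policy, and all names are illustrative.

```python
import itertools

class Dispatcher:
    def __init__(self, backends):
        self.rr = itertools.cycle(backends)  # placeholder assignment policy
        self.open_conns = {}                 # job_id -> back-end holding an open connection

    def assign(self, job_id):
        """Assign a new job to a back-end and open a connection for it."""
        server = next(self.rr)
        self.open_conns[job_id] = server     # connection stays open for the job's lifetime
        return server

    def relay(self, job_id, data):
        """Forward client data for an in-progress job over its open connection."""
        return (self.open_conns[job_id], data)

    def complete(self, job_id):
        del self.open_conns[job_id]          # job done: close the per-job connection
```

The invariant the citing text emphasizes is visible in `open_conns`: every job in progress at a back-end corresponds to exactly one open dispatcher-to-server connection.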
“…For example, the selected server may not have sufficient disk bandwidth to admit a new multimedia presentation along with other requests being served. Content-based routing techniques have been proposed in [15,17]. However, these techniques merely …”
Section: Introduction
confidence: 99%