IEEE INFOCOM 2003. Twenty-Second Annual Joint Conference of the IEEE Computer and Communications Societies (IEEE Cat. No.03CH37
DOI: 10.1109/infcom.2003.1208667

Server-based inference of Internet link lossiness

Abstract: We investigate the problem of inferring the packet loss characteristics of Internet links using server-based measurements. Unlike much of the existing work on network tomography, which is based on active probing, we make inferences based on passive observation of end-to-end client-server traffic. Our work on passive network tomography focuses on identifying lossy links (i.e., the trouble spots in the network). We have developed three techniques for this purpose based on Random Sampling, Linear Optimization,…
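The techniques named in the abstract lend themselves to a small worked example. Below is a minimal sketch of the linear-optimization idea only, simplified to non-negative least squares over a toy topology: per-path loss, in log space, decomposes additively over the links a path traverses, so solving A x ≈ b with x ≥ 0 recovers per-link loss estimates. The incidence matrix, observed success rates, and the 5% "lossy" cutoff are illustrative assumptions, not the paper's exact formulation.

    # Hedged sketch: infer per-link loss from passive end-to-end path
    # observations via non-negative least squares (a simplification of
    # the linear-optimization technique; all inputs are illustrative).
    import numpy as np
    from scipy.optimize import nnls

    # Path-link incidence matrix: rows = client-server paths, cols = links.
    # A[p, j] = 1 if path p traverses link j (topology assumed known).
    A = np.array([[1, 1, 0, 0],
                  [1, 0, 1, 0],
                  [1, 0, 0, 1]], dtype=float)

    # Per-path packet success rates observed from passive traffic traces.
    path_success = np.array([0.95, 0.80, 0.94])

    # In log space, path loss decomposes additively over links:
    #   -log(success_p) = sum over links j in p of -log(1 - loss_j)
    b = -np.log(path_success)

    # Solve A x ~= b with x >= 0, where x[j] = -log(1 - loss_j).
    x, _residual = nnls(A, b)
    link_loss = 1.0 - np.exp(-x)

    THRESHOLD = 0.05  # illustrative cutoff for flagging a link as lossy
    for j, loss in enumerate(link_loss):
        print(f"link {j}: estimated loss {loss:.3f}"
              + (" [LOSSY]" if loss > THRESHOLD else ""))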

Cited by 149 publications (175 citation statements)
References 27 publications
“…The end-to-end transmission rates are only accurate when we have a sufficiently large number of packets. To handle the cases where there are not sufficient data to calculate the end-to-end transmission rates, we have proposed to use the second technique, namely the Bayesian inference technique [9] that is less vulnerable to end-to-end loss rates but also much more complex. The idea here is to try to generate a set of possible link loss rates that can explain the observations of end-to-end data.…”
Section: Failure Location In Wireless Sensor Network
confidence: 99%
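A minimal sketch of the sampling idea this statement describes, in Python: draw candidate link loss-rate vectors from a prior and weight each by how well it explains the observed end-to-end successes. The uniform prior, the binomial likelihood, and all numbers are illustrative assumptions; [9] uses an MCMC procedure rather than this simple importance sampler.

    # Hedged sketch: generate candidate link loss rates that could explain
    # end-to-end observations, and weight them by binomial likelihood.
    import numpy as np

    rng = np.random.default_rng(0)

    # Path-link incidence and passive observations: per path, packets
    # sent and packets received (assumed available from traffic traces).
    A = np.array([[1, 1, 0],
                  [1, 0, 1]])
    sent = np.array([1000, 1000])
    received = np.array([940, 790])

    N_SAMPLES = 20000
    # Prior: each link loss rate uniform on [0, 0.2] (an assumption).
    candidates = rng.uniform(0.0, 0.2, size=(N_SAMPLES, A.shape[1]))

    # Path success probability implied by a candidate: product over the
    # success probabilities (1 - loss) of the links the path traverses.
    path_success = np.prod(np.where(A[None, :, :] == 1,
                                    1.0 - candidates[:, None, :], 1.0),
                           axis=2)

    # Binomial log-likelihood of the observations under each candidate.
    loglik = np.sum(received * np.log(path_success)
                    + (sent - received) * np.log1p(-path_success), axis=1)
    weights = np.exp(loglik - loglik.max())
    weights /= weights.sum()

    # Posterior mean loss rate per link under the sampled candidates.
    posterior_mean = weights @ candidates
    print("posterior mean link loss:", np.round(posterior_mean, 3))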
“…If the majority of the possible loss rates of a link are bad, then it is likely that the link is bad, otherwise it is good. For details of the MCMC method, please refer to [9]. …”
Section: Failure Location In Wireless Sensor Network
confidence: 99%
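The majority rule quoted above is straightforward to state in code. A hedged sketch, assuming posterior loss-rate samples for a link are already available from the MCMC run in [9] (here stood in for by synthetic draws); the 5% badness threshold and simple-majority cutoff are illustrative parameters:

    # Hedged sketch of the majority rule: a link is flagged as bad when
    # most of its sampled loss rates exceed a threshold.
    import numpy as np

    def classify_link(loss_samples, bad_threshold=0.05, majority=0.5):
        """Return True (bad) if more than `majority` of the posterior
        samples put the link's loss rate above `bad_threshold`."""
        loss_samples = np.asarray(loss_samples)
        frac_bad = np.mean(loss_samples > bad_threshold)
        return frac_bad > majority

    # Example: samples concentrated around 8% loss -> flagged bad.
    rng = np.random.default_rng(1)
    samples = rng.normal(0.08, 0.02, size=1000).clip(0, 1)
    print("link bad?", classify_link(samples))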
“…This simplifies network dimensioning, since it removes the packet loss related hop count constraint, and is feasible since the edge links are the most probable congestion points of a domain [21], whereas backbone links are overprovisioned [22]. This approach does not induce any states in the core network and does not require core routers to be aware of any signaling, which is desired for scalability and resilience reasons, and it is also proven to be very resource-efficient if resilience against network failures is required [23].…”
Section: Possible Practical Traffic Engineering Solutionsmentioning
confidence: 99%
“…The Peer Monitor takes the duration of time d, or d/k probes in each sub-stream, to find out that P2, P3 share a congested link. Knowing that the flow paths from the peers converge as they approach the consumer, and the paths usually remain unchanged for at least a day [9], it is quite safe for our algorithm to adopt a transitive induction approach to relate a new inference to existing ones inferred minutes before. Therefore, P3 joins g1 as a result because P2 belongs to g1.…”
Section: Peer Monitor
confidence: 99%
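The transitive induction step described above (P3 joins g1 because P2 already belongs to g1) maps naturally onto a union-find structure. A minimal sketch; the peer names follow the quoted example, and union-find itself is an illustrative implementation choice, not necessarily the cited system's data structure:

    # Hedged sketch: group peers that transitively share a congested
    # link using union-find, so a new pairwise inference merges a peer
    # into an existing group.
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Earlier inference: P1 and P2 share a congested link (group g1).
    union("P1", "P2")
    # New inference minutes later: P2 and P3 share a congested link.
    union("P3", "P2")

    print(find("P3") == find("P1"))  # True: P3 transitively joins g1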
“…However, it does not mean that a local connection is free of congestion. As [9] suggests, packet loss (hence the congestion) in an end-to-end connection is usually caused by only a few hop-links in the path. Although there are more than one video server cluster to share the server and network loading, the system still suffers from single point of failure problem as the stream is still pushed from a single source over a single connection.…”
Section: Introduction
confidence: 99%