The capacity of ad hoc wireless networks is constrained by the interference between concurrent transmissions from neighboring nodes. Gupta and Kumar have shown that the capacity of an ad hoc network does not scale well with an increasing number of nodes when omnidirectional antennas are used [6]. We investigate the capacity of ad hoc wireless networks using directional antennas. In this work, we consider arbitrary networks and random networks where nodes are assumed to be static. In arbitrary networks, due to the reduction of the interference …
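For reference, the Gupta–Kumar scaling result for omnidirectional antennas [6], which this work takes as its baseline, can be summarized as follows (W is the channel bandwidth and n the number of nodes; only the asymptotic order is implied here, not the constants):

```latex
% Per-node throughput with omnidirectional antennas (Gupta & Kumar [6])
\begin{align*}
  \lambda_{\text{arbitrary}}(n) &= \Theta\!\left(\frac{W}{\sqrt{n}}\right), \\
  \lambda_{\text{random}}(n)    &= \Theta\!\left(\frac{W}{\sqrt{n \log n}}\right).
\end{align*}
```

In both cases the per-node throughput vanishes as n grows, which is what motivates looking at directional antennas to shrink the interference footprint of each transmission.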
Continuing the process of improving TCP through the addition of new algorithms, as in Tahoe and Reno, TCP SACK aims to make TCP robust in the presence of multiple losses from the same window. In this paper we present analytic models to estimate the latency and steady-state throughput of TCP Tahoe, Reno, and SACK, and we validate our models using both simulations and TCP traces collected from the Internet. In addition to being the first models for the latency of finite Tahoe and SACK flows, our model for the latency of TCP Reno estimates transfer times more accurately than existing models. The improved accuracy is due in part to more accurate modeling of timeouts, of the evolution of cwnd during slow start, and of the delayed ACK timer. Our models also show that, under the losses introduced by the drop-tail queues that dominate most routers in the Internet, current implementations of SACK can fail to provide adequate protection against timeouts: the loss of more than roughly half the packets in a round leads to a timeout. We also show that with independent losses SACK performs better than Tahoe and Reno, while as losses become correlated, Tahoe can outperform both Reno and SACK.
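To illustrate the kind of bookkeeping such latency models perform (this is an illustrative sketch, not the exact model from the paper), the code below counts the rounds needed to transfer a given number of segments during slow start, assuming delayed ACKs make cwnd grow by a factor of roughly 1.5 per round instead of 2. The function name and parameters are ours.

```python
def slow_start_rounds(num_segments, init_cwnd=2, ssthresh=float("inf"),
                      growth=1.5):
    """Count rounds (RTTs) to send `num_segments` during slow start.

    Illustrative sketch only: cwnd grows geometrically by `growth`
    per round (about 1.5 with delayed ACKs, 2 without) until it
    reaches `ssthresh` or all segments have been sent.
    """
    cwnd = init_cwnd
    sent = 0
    rounds = 0
    while sent < num_segments:
        rounds += 1
        sent += int(cwnd)                  # one window's worth per round
        cwnd = min(cwnd * growth, ssthresh)
    return rounds

# Example: a 30-segment transfer with delayed ACKs takes about 6 rounds.
print(slow_start_rounds(30))
```

A latency estimate then multiplies the round count by the RTT and adds the expected cost of losses, fast recovery, and timeouts, which is where the Tahoe, Reno, and SACK models differ.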
We propose an explicit rate indication scheme for congestion avoidance in ATM networks. In this scheme, the network switches monitor their load on each link, determining a load factor, the available capacity, and the number of currently active virtual channels. This information is used to advise the sources about the rates at which they should transmit. The algorithm is designed to achieve efficiency, fairness, controlled queueing delays, and fast transient response. The algorithm is also robust to measurement errors caused by variation in ABR demand and capacity. We present a performance analysis of the scheme using both analytical arguments and simulation results. The scheme is being implemented by several ATM switch manufacturers.

We begin by briefly examining the ABR service. In section 3, we describe basic concepts such as the switch model and design goals. Section 4 describes the algorithm. Section 5 presents representative simulations to show that the scheme works under stressful conditions; we also present analytical arguments in appendix A. Finally, our conclusions are presented in section 6.

2 The ABR Control Mechanism

ATM networks offer five classes of service: constant bit rate (CBR), real-time variable bit rate (rt-VBR), non-real-time variable bit rate (nrt-VBR), available bit rate (ABR), and unspecified bit rate (UBR). Of these, ABR and UBR are designed for data traffic, which has a bursty, unpredictable behavior.

The UBR service is simple in the sense that users negotiate only their peak cell rates (PCR) when setting up the connection. If many sources send traffic at the same time, the total traffic at a switch may exceed the output capacity, causing delays, buffer overflows, and loss. The network tries to minimize the delay and loss using intelligent buffer allocation [15], cell drop [16], and scheduling, but makes no guarantees to the application.

The ABR service provides better service for data traffic by periodically advising sources about the rate at which they should be transmitting. The switches monitor their load, compute the available bandwidth, and divide it fairly among the active flows. This allows competing sources to get a fair share of the bandwidth and not be starved by a small set of rogue sources. The feedback from the switches to the sources is sent in Resource Management (RM) cells, which are sent periodically by the sources and turned around by the destinations (see figure 1).

The RM cells contain the source's current cell rate (CCR) and several other fields that can be used by the switches to provide feedback to the source. These fields are: Explicit Rate (ER), the Congestion Indication (CI) flag, and the No Increase (NI) flag. The ER field indicates the rate that the network can support at that particular instant in time. When starting at the source, the ER field is usually set to the PCR, and the CI and NI flags are clear. On the path, each switch reduces the ER field to the maximum rate it can support and sets CI or NI if necessary [12]. The RM cells flowing from the source to the destination are called forward RM cells (FRMs), while those …
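To make the RM-cell feedback path concrete, here is a minimal sketch of how a switch might process a forward RM cell: it clamps the ER field to the rate it has computed for that link and raises the CI/NI flags under congestion. The class and field names are ours, and the computation of the switch's advised rate is left abstract; this is not the specific algorithm described in section 4.

```python
from dataclasses import dataclass

@dataclass
class RMCell:
    ccr: float        # source's current cell rate
    er: float         # explicit rate, initialized to the PCR at the source
    ci: bool = False  # Congestion Indication flag
    ni: bool = False  # No Increase flag

def process_forward_rm(cell: RMCell, advised_rate: float,
                       congested: bool, mildly_loaded: bool) -> RMCell:
    """Clamp ER to the rate this switch can support and set flags.

    `advised_rate` stands in for whatever per-VC rate the switch's
    algorithm computes from its load factor, available capacity, and
    number of active VCs (hypothetical inputs for this sketch).
    """
    cell.er = min(cell.er, advised_rate)   # never raise ER, only reduce it
    if congested:
        cell.ci = True                     # ask the source to decrease
    elif mildly_loaded:
        cell.ni = True                     # ask the source not to increase
    return cell

# Example: a cell that entered with ER = PCR = 155 Mbps is clamped
# to the 40 Mbps this switch can currently support.
cell = RMCell(ccr=80.0, er=155.0)
process_forward_rm(cell, advised_rate=40.0, congested=False, mildly_loaded=True)
print(cell)   # RMCell(ccr=80.0, er=40.0, ci=False, ni=True)
```

The destination then turns the cell around, and the source adjusts its allowed cell rate when the backward RM cell arrives, closing the control loop.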