Internet Performance Recommendations
This memo presents two recommendations to the Internet community concerning measures to improve and preserve Internet performance. It presents a strong recommendation for testing, standardization, and widespread deployment of active queue management in routers, to improve the performance of today's Internet. It also urges a concerted effort of research, measurement, and ultimate deployment of router mechanisms to protect the Internet from flows that are not sufficiently responsive to congestion notification.
This note describes a proposed addition of ECN (Explicit Congestion Notification) to IP. TCP is currently the dominant transport protocol used in the Internet. We begin by describing TCP's use of packet drops as an indication of congestion. Next we argue that with the addition of active queue management (e.g., RED) to the Internet infrastructure, where routers detect congestion before the queue overflows, routers are no longer limited to packet drops as an indication of congestion. Routers could instead set a Congestion Experienced (CE) bit in the packet header of packets from ECN-capable transport protocols. We describe when the CE bit would be set in the routers, and describe what modifications would be needed to TCP to make it ECN-capable. Modifications to other transport protocols (e.g., unreliable unicast or multicast, reliable multicast, other reliable unicast transport protocols) could be considered as those protocols are developed and advance through the standards process.
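As a rough illustration of the marking decision described above, the Python fragment below combines a RED-style averaged queue length with the CE-marking rule: once the average queue crosses a threshold, packets from ECN-capable transports are marked rather than dropped, while packets from other flows still see drops. This is only a hedged sketch, not the proposal's normative algorithm; the class and parameter names (RedEcnQueue, min_thresh, max_thresh, weight) are illustrative assumptions.

```python
import random

class RedEcnQueue:
    """Illustrative RED-style queue that marks ECN-capable packets
    instead of dropping them once the averaged queue length is high.
    Thresholds and the EWMA weight are illustrative, not normative."""

    def __init__(self, min_thresh=5, max_thresh=15, weight=0.002, capacity=30):
        self.queue = []
        self.avg = 0.0          # EWMA of the instantaneous queue length
        self.min_thresh = min_thresh
        self.max_thresh = max_thresh
        self.weight = weight
        self.capacity = capacity

    def enqueue(self, packet):
        # Update the averaged queue length on every arrival.
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)

        if len(self.queue) >= self.capacity:
            return "drop"                      # hard overflow: always drop

        if self.avg < self.min_thresh:
            self.queue.append(packet)
            return "enqueue"

        # Between the thresholds, notify a growing fraction of arrivals;
        # above max_thresh, notify every arrival.
        if self.avg >= self.max_thresh:
            notify = True
        else:
            p = (self.avg - self.min_thresh) / (self.max_thresh - self.min_thresh)
            notify = random.random() < p

        if not notify:
            self.queue.append(packet)
            return "enqueue"

        if packet.get("ect"):                  # ECN-capable transport
            packet["ce"] = True                # set Congestion Experienced
            self.queue.append(packet)
            return "mark"
        return "drop"                          # legacy flow: fall back to a drop
```

A packet here is just a dictionary with an "ect" flag; a real router would operate on IP header bits, and an ECN-capable TCP would additionally have to echo the congestion indication back to the sender, which this sketch does not model.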
Most operating systems use interface interrupts to schedule network tasks. Interrupt-driven systems can provide low overhead and good latency at low offered load, but degrade significantly at higher arrival rates unless care is taken to prevent several pathologies. These are various forms of receive livelock, in which the system spends all of its time processing interrupts, to the exclusion of other necessary tasks. Under extreme conditions, no packets are delivered to the user application or the output of the system. To avoid livelock and related problems, an operating system must schedule network interrupt handling as carefully as it schedules process execution. We modified an interrupt-driven networking implementation to do so; this modification eliminates receive livelock without degrading other aspects of system performance. Our modifications include the use of polling when the system is heavily loaded, while retaining the use of interrupts under lighter load. We present measurements demonstrating the success of our approach.
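The heart of the modification described above is to fall back from interrupts to bounded polling under load, so that packet processing cannot starve other work, and to return to interrupt-driven operation once the backlog drains. The event-loop simulation below is a hedged sketch of that idea in Python; the names (Nic, POLL_QUOTA, poll_round) are invented for illustration, and this is not the authors' kernel code.

```python
from collections import deque

POLL_QUOTA = 8   # max packets processed per polling round (illustrative)

class Nic:
    """Toy NIC model: receive interrupts are either enabled or masked."""
    def __init__(self):
        self.rx_ring = deque()
        self.interrupts_enabled = True

def interrupt(nic, poll_list):
    """Interrupt handler: do no protocol work here. Mask further
    interrupts and hand the device over to the polling loop."""
    nic.interrupts_enabled = False
    if nic not in poll_list:
        poll_list.append(nic)

def poll_round(poll_list, deliver):
    """Process at most POLL_QUOTA packets per device, then yield,
    so user processes and transmit work still get CPU time."""
    for nic in list(poll_list):
        for _ in range(POLL_QUOTA):
            if not nic.rx_ring:
                break
            deliver(nic.rx_ring.popleft())
        if not nic.rx_ring:
            # Backlog drained: return to cheap interrupt-driven mode.
            poll_list.remove(nic)
            nic.interrupts_enabled = True

# Under a burst, the handler runs once; all further packets are then
# consumed by bounded polling rounds interleaved with other work.
nic, poll_list, delivered = Nic(), [], []
nic.rx_ring.extend(range(20))
interrupt(nic, poll_list)
while poll_list:
    poll_round(poll_list, delivered.append)
print(len(delivered), "packets delivered;",
      "interrupts re-enabled:", nic.interrupts_enabled)
```

The quota is what prevents livelock: no matter how fast packets arrive, each polling round hands the CPU back after a bounded amount of receive work.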
As IP technologies providing both tremendous capacity and the ability to establish dynamic secure associations between endpoints emerge, Virtual Private Networks (VPNs) are going through dramatic growth. The number of endpoints per VPN is growing and the communication pattern between endpoints is becoming increasingly hard to forecast. Consequently, users are demanding dependable, dynamic connectivity between endpoints, with the network expected to accommodate any traffic matrix, as long as the traffic to the endpoints does not overwhelm the rates of the respective ingress and egress links. We propose a new service interface, termed a hose, to provide the appropriate performance abstraction. A hose is characterized by the aggregate traffic to and from one endpoint in the VPN to the set of other endpoints in the VPN, and by an associated performance guarantee. Hoses provide important advantages to a VPN customer: (i) flexibility to send traffic to a set of endpoints without having to specify the detailed traffic matrix, and (ii) reduction in the size of access links through multiplexing gains obtained from the natural aggregation of the flows between endpoints. As compared with the conventional point-to-point (or customer-pipe) model for managing QoS, hoses provide a reduction in the state information a customer must maintain. On the other hand, hoses would appear to increase the complexity of the already difficult problem of resource management to support QoS. To manage network resources in the face of this increased uncertainty, we consider both conventional statistical multiplexing techniques, and a new resizing technique based on online measurements. To study these performance issues, we run trace-driven simulations, using traffic derived from AT&T's voice network, and from a large corporate data network. From the customer's perspective, we find that aggregation of traffic at the hose level provides significant multiplexing gains. From the provider's perspective, we find that the statistical multiplexing and resizing techniques deal effectively with uncertainties about the traffic, providing significant gains over the conventional alternative of a mesh of statically sized customer-pipes between endpoints.
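To make the hose abstraction concrete, the sketch below checks whether an arbitrary endpoint-to-endpoint traffic matrix is consistent with a set of hoses (the traffic each endpoint sends must fit its egress hose rate, and the traffic it receives must fit its ingress hose rate), and contrasts the access capacity a hose needs with what a mesh of peak-sized customer-pipes would need. This is a hedged illustration: the function names and example rates are invented, and none of the paper's statistical multiplexing or measurement-based resizing machinery is reproduced here.

```python
def fits_hoses(traffic, egress_hose, ingress_hose):
    """traffic[i][j] is the rate sent from endpoint i to endpoint j.
    A matrix is admissible under the hose model iff every row sum is
    within the sender's egress hose and every column sum is within
    the receiver's ingress hose."""
    endpoints = traffic.keys()
    for i in endpoints:
        if sum(traffic[i].values()) > egress_hose[i]:
            return False
    for j in endpoints:
        if sum(traffic[i].get(j, 0) for i in endpoints) > ingress_hose[j]:
            return False
    return True

def hose_access_capacity(egress_hose, ingress_hose, endpoint):
    """Access link sized for the hose: enough for the endpoint's own
    aggregate in each direction, independent of the traffic matrix."""
    return max(egress_hose[endpoint], ingress_hose[endpoint])

def pipe_access_capacity(peak_pipes, endpoint):
    """Customer-pipe alternative: each point-to-point pipe is provisioned
    at its own peak, so the access link must carry the sum of those
    peaks in each direction."""
    out = sum(r for (src, dst), r in peak_pipes.items() if src == endpoint)
    inb = sum(r for (src, dst), r in peak_pipes.items() if dst == endpoint)
    return max(out, inb)

# Three-endpoint example with illustrative rates (Mb/s).
egress = {"A": 10, "B": 10, "C": 10}
ingress = {"A": 10, "B": 10, "C": 10}
traffic = {"A": {"B": 6, "C": 4}, "B": {"A": 3, "C": 2}, "C": {"A": 1, "B": 4}}
peaks = {("A", "B"): 8, ("A", "C"): 8, ("B", "A"): 8,
         ("B", "C"): 8, ("C", "A"): 8, ("C", "B"): 8}

print("admissible under hoses:", fits_hoses(traffic, egress, ingress))
print("hose access capacity at A:", hose_access_capacity(egress, ingress, "A"))
print("customer-pipe access capacity at A:", pipe_access_capacity(peaks, "A"))
```

With these illustrative numbers the matrix is admissible, and the hose-sized access link at A needs 10 Mb/s per direction versus 16 Mb/s for the mesh of peak-sized pipes, which is the kind of multiplexing gain the abstract refers to.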