Traffic often needs to be split over multiple equivalent backend servers, links, paths, or middleboxes. For example, in a load-balancing system, switches distribute requests for online services across backend servers. Hash-based approaches like Equal-Cost Multi-Path (ECMP) have low accuracy due to hash collisions and incur significant churn during updates. In a Software-Defined Network (SDN), the accuracy of traffic splits can be improved by crafting a set of wildcard rules for switches that better matches the actual traffic distribution. The drawback of existing SDN-based traffic-splitting solutions is poor scalability: they generate too many rules for the small rule tables on switches. In this paper, we propose Niagara, an SDN-based traffic-splitting scheme that achieves accurate traffic splits while being extremely efficient in its use of the rule-table space available on commodity switches. Niagara uses an incremental update strategy to minimize traffic churn during updates. Experiments demonstrate that Niagara (1) achieves nearly optimal accuracy using only 1.2%-37% of the rule space of the current state of the art, (2) scales to tens of thousands of services under constrained rule-table capacity, and (3) incurs near-minimal churn.
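To make the rule-space saving concrete, the sketch below illustrates the core idea behind approximating a split fraction with a handful of wildcard rules: a rule fixing k low-order bits of a flow hash captures exactly 1/2^k of the traffic. The function name `pow2_terms`, the truncation depth, and the error tolerance are illustrative assumptions; this is not Niagara's published rule-generation algorithm, which also optimizes rules jointly across all weights.

```python
def pow2_terms(weight, max_bits=6):
    """Decompose a traffic fraction into a sum of powers of two,
    truncated at 2**-max_bits. Each term corresponds to one wildcard
    rule matching that many low-order hash bits (a rule fixing k
    suffix bits covers exactly 1/2**k of the flows).

    Illustrative sketch only, not Niagara's actual algorithm."""
    terms, acc = [], 0.0
    for k in range(1, max_bits + 1):
        if acc + 2 ** -k <= weight + 1e-12:
            terms.append(k)
            acc += 2 ** -k
    return terms, acc  # bit-lengths of the rules, achieved fraction

# Example: a backend that should receive 0.3 of the traffic.
# 0.3 ~= 1/4 + 1/32 + 1/64 = 0.296875, i.e. three wildcard rules
# with absolute error ~0.003, instead of many exact-match entries.
print(pow2_terms(0.3))   # ([2, 5, 6], 0.296875)
```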
During the past four years, several papers have proposed rules for sizing buffers in Internet core routers. Appenzeller et al. suggest that a link needs a buffer of size O(C/√N), where C is the capacity of the link and N is the number of flows sharing the link. If correct, buffers could be reduced by 99% in a typical backbone router today without loss in throughput. Enachescu et al. and Raina et al. suggest that buffers can be reduced even further, to 20-50 packets, if we are willing to sacrifice a fraction of link capacity and if there is a large ratio between the speed of core and access links. If correct, this is a five-orders-of-magnitude reduction in buffer size. Each proposal is based on theoretical analysis and validated using simulations. Given the potential benefits (and the risk of getting it wrong!) it is worth asking if these results hold in real operational networks. In this paper, we report buffer-sizing experiments performed on real networks: either laboratory networks with commercial routers and customized switching and monitoring equipment (UW Madison, Sprint ATL, and University of Toronto), or operational backbone networks (the Level 3 Communications backbone network, Internet2, and Stanford). The good news: subject to the limited scenarios we can create, the buffer-sizing results appear to hold. While we are confident that the O(C/√N) rule will hold quite generally for backbone routers, the 20-50 packet rule should be applied with caution.
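As a back-of-the-envelope check on the abstract's 99% claim, the snippet below compares the classic bandwidth-delay-product sizing with the O(C/√N) rule. The link speed, RTT, and flow count are illustrative values chosen here, not numbers from the paper.

```python
# Worked example of the O(C/sqrt(N)) rule (illustrative numbers):
# a 10 Gb/s backbone link with 250 ms average RTT, shared by
# N = 10,000 long-lived TCP flows.
C = 10e9          # link capacity, bits per second
RTT = 0.25        # average round-trip time, seconds
N = 10_000        # number of concurrent flows

rule_of_thumb = C * RTT             # classic B = C x RTT sizing
small_buffer = C * RTT / N ** 0.5   # Appenzeller et al. sizing

print(f"bandwidth-delay buffer: {rule_of_thumb / 8 / 1e6:.0f} MB")  # ~312 MB
print(f"C*RTT/sqrt(N) buffer:   {small_buffer / 8 / 1e6:.1f} MB")   # ~3.1 MB
# The reduction is 1 - 1/sqrt(10000) = 99%, matching the abstract's claim.
```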
We explore a novel approach for building data center interconnects based on free-space optics. It uses a combination of a digital micromirror device (DMD) and a mirror assembly as a transmitter, and a photodetector on top of each rack as a receiver (Figure 1). Our approach enables all pairs of racks to establish direct links, and we can reconfigure such links (i.e., connect different rack pairs) within 12 µs. To carry traffic from a source rack to a destination rack, transmitters and receivers in our interconnect can be dynamically linked in millions of ways. We develop topology construction and routing methods to exploit this flexibility, including a flow scheduling algorithm that is a constant-factor approximation to the offline optimal solution. Experiments with a small prototype point to the feasibility of our approach. Simulations using realistic data center workloads show that, compared to the conventional folded-Clos interconnect, our approach can improve mean flow completion time by 30-95% and reduce cost by 25-40%.
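The scheduling algorithm is only named, not described, in this abstract, so the sketch below conveys the flavor of the problem with a simple greedy heuristic: assign scarce transmitter and receiver budgets to the rack pairs with the largest outstanding demand. The function, its signature, and the one-link-per-rack budgets are assumptions for illustration; this is not the constant-factor algorithm the authors analyze.

```python
def greedy_links(demands, tx_per_rack=1, rx_per_rack=1):
    """Greedy sketch of reconfigurable-link assignment: repeatedly give
    a direct free-space link to the rack pair with the largest
    outstanding demand, subject to per-rack transmitter/receiver budgets.

    demands: dict mapping (src_rack, dst_rack) -> bytes outstanding.
    Returns the list of (src, dst) pairs that receive direct links.
    """
    tx_left, rx_left, links = {}, {}, []
    for (s, d), _ in sorted(demands.items(), key=lambda kv: -kv[1]):
        if tx_left.get(s, tx_per_rack) > 0 and rx_left.get(d, rx_per_rack) > 0:
            tx_left[s] = tx_left.get(s, tx_per_rack) - 1
            rx_left[d] = rx_left.get(d, rx_per_rack) - 1
            links.append((s, d))
    return links

# Example: rack 0 has heavy traffic to rack 2, so that pair wins the
# link; the smaller demands lose out once the budgets are exhausted.
print(greedy_links({(0, 2): 9e9, (0, 1): 1e6, (1, 2): 5e8}))  # [(0, 2)]
```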
TCP is designed to operate in a wide range of networks. Without any knowledge of the underlying network and traffic characteristics, TCP is doomed to continuously increase and decrease its congestion window size to adapt to changes in the network or its traffic. In light of the emerging popularity of centrally controlled Software-Defined Networks (SDNs), one might wonder whether we can take advantage of the global network view available at the controller to make faster and more accurate congestion control decisions. In this paper, we identify the need for, and the underlying requirements of, a congestion control adaptation mechanism. To this end, we propose OpenTCP, a TCP adaptation framework that works in SDNs. OpenTCP allows network operators to define rules for tuning TCP as a function of network and traffic conditions. We also present a preliminary implementation of OpenTCP in a ∼4000-node data center.
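The abstract describes operator-defined rules that tune TCP from the controller's global view, but not a concrete interface. The sketch below shows a hypothetical rule of that kind: the `stats` fields, the thresholds, and the `push_to_hosts` callback are all assumptions for illustration, not OpenTCP's actual API.

```python
# Minimal sketch of an operator-defined adaptation rule (hypothetical
# API and thresholds). The controller periodically inspects its global
# view and pushes new TCP parameters to end hosts.

def adapt_tcp(stats, push_to_hosts):
    """stats: global view, e.g. {'avg_link_util': 0.18, 'loss_rate': 1e-5}
    push_to_hosts: callback that distributes kernel-tunable updates."""
    if stats["avg_link_util"] < 0.3 and stats["loss_rate"] < 1e-4:
        # Network is underutilized and healthy: let flows start faster.
        push_to_hosts({"init_cwnd_segments": 16})
    elif stats["loss_rate"] > 1e-2:
        # Heavy loss: fall back to conservative defaults.
        push_to_hosts({"init_cwnd_segments": 4})

# Example: an idle, low-loss network triggers the aggressive setting.
adapt_tcp({"avg_link_util": 0.18, "loss_rate": 1e-5}, print)
```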