As emerging network technologies and softwarization render networks more flexible, the question arises of how to exploit these flexibilities for optimization. Given the complexity of the involved network protocols and the context in which networks operate, such optimizations are increasingly difficult to perform. An interesting vision in this regard is that of "self-driving" networks: networks which measure, analyze, and control themselves in an automated manner, reacting to changes in the environment (e.g., demand) while exploiting existing flexibilities to optimize themselves. A fundamental challenge faced by any (self-)optimizing network is its limited knowledge about future changes in the demand and the environment in which it operates. Indeed, given that reconfigurations entail resource costs and may take time, a network configuration that is "optimal" for the current demand and environment may not remain optimal in the near future. It is therefore desirable that (self-)optimizations also prepare the network for possibly unexpected events. This paper makes the case for empowering self-driving networks: empowerment is an information-theoretic measure which accounts for how "prepared" a network is and how much flexibility it preserves over time. While empowerment has been successfully employed in other domains such as robotics, we are not aware of any applications in networking. As a case study for the use of empowerment in networks, we consider self-driving networks offering topological flexibilities, i.e., reconfigurable edges.
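To make the notion of empowerment concrete, the minimal sketch below computes it for a toy network with a handful of reconfigurable edges. It assumes deterministic reconfiguration dynamics, in which case n-step empowerment reduces to the logarithm of the number of distinct topologies reachable within n steps; the function names and the toy topology are illustrative and not taken from the paper.

```python
from math import log2

def reachable_topologies(edges, candidate_links, horizon):
    """Enumerate topologies reachable within `horizon` reconfiguration steps,
    where each step toggles (adds or removes) one reconfigurable link."""
    frontier = {frozenset(edges)}
    seen = set(frontier)
    for _ in range(horizon):
        nxt = set()
        for topo in frontier:
            for link in candidate_links:
                new = topo ^ {link}          # toggle one reconfigurable edge
                if new not in seen:
                    seen.add(new)
                    nxt.add(new)
        frontier = nxt
    return seen

def empowerment(edges, candidate_links, horizon):
    """For deterministic dynamics, n-step empowerment reduces to
    log2(number of distinct states reachable within n steps)."""
    return log2(len(reachable_topologies(edges, candidate_links, horizon)))

# Toy example: a 4-node ring with two optional "shortcut" links (assumed values).
base = {(0, 1), (1, 2), (2, 3), (3, 0)}
shortcuts = [(0, 2), (1, 3)]
print(empowerment(base, shortcuts, horizon=2))   # -> 2.0 bits
```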
The bandwidth and latency requirements of modern datacenter applications have led researchers to propose various topology designs using static, dynamic demand-oblivious (rotor), and/or dynamic demand-aware switches. However, given the diverse nature of datacenter traffic, there is little consensus about how these designs would fare against each other. In this work, we analyze the throughput of existing topology designs under different traffic patterns and study their unique advantages and potential costs in terms of bandwidth and latency "tax". To overcome the identified inefficiencies, we propose Cerberus, a unified, two-layer leaf-spine optical datacenter design with three topology types. Cerberus systematically matches different traffic patterns with their most suitable topology type: e.g., latency-sensitive flows are transmitted via a static topology, all-to-all traffic via a rotor topology, and elephant flows via a demand-aware topology. We show analytically and in simulations that Cerberus can improve throughput significantly compared to alternative approaches and operate datacenters at higher loads while being throughput-proportional.
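The following sketch illustrates the kind of traffic-to-topology matching Cerberus performs, expressed as a simple per-flow dispatch rule. The flow attributes, the elephant-flow threshold, and the function names are assumptions for illustration; they are not Cerberus's actual classification logic.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    src: int
    dst: int
    size_bytes: int
    latency_sensitive: bool

# Illustrative cutoff; the real threshold would be tuned per deployment.
ELEPHANT_THRESHOLD = 15 * 10**6   # ~15 MB

def choose_topology(flow: Flow) -> str:
    """Match a flow to the topology type it is best served by,
    following the high-level policy described in the abstract."""
    if flow.latency_sensitive:
        return "static"        # avoids any reconfiguration delay on the path
    if flow.size_bytes >= ELEPHANT_THRESHOLD:
        return "demand-aware"  # large enough to amortize the latency tax
    return "rotor"             # demand-oblivious, suited to all-to-all traffic

print(choose_topology(Flow(src=1, dst=7, size_bytes=64_000, latency_sensitive=True)))
```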
A network virtualization hypervisor for Software Defined Networking (SDN) is the essential component for realizing virtual SDN networks (vSDNs). Virtualizing software defined networks enables tenants to bring their own SDN controllers in order to individually program the network control of their virtual SDN networks. A hypervisor acts as an intermediate layer between the tenant SDN controllers and their respective virtual SDN networks. The hypervisor comprises the network functions that are necessary for virtualization, e.g., translation and isolation functions. For scalability, the hypervisor can be realized via multiple physically distributed instances, each hosting the needed virtualization functions. Hence, the physical locations of the instances that realize the hypervisor may impact the overall performance of the virtual SDN networks. Network virtualization thus adds new dimensions to the general SDN controller placement problem. This paper initiates the study of the network hypervisor placement problem (HPP). The HPP targets the following questions: How many hypervisor instances are needed? Where should the hypervisor instances be placed in the network? For our study of the HPP, we provide a mathematical model that solves the HPP for the case where node and link capacity constraints are not considered. Based on this model, we propose four latency metrics for optimizing placement solutions for vSDNs. Using a real network topology, our evaluation quantifies the trade-offs between the new metrics when used as objectives. Furthermore, we analyze the impact of the physical network topology on the optimization results and identify potential for improvement, e.g., in terms of runtime.
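The sketch below illustrates one way to state the HPP concretely: a brute-force selection of k hypervisor locations that minimizes the worst-case controller-to-hypervisor-to-switch latency over shortest paths. The metric shown is only one of several plausible latency objectives, and the helper names and toy topology are assumptions, not the paper's model.

```python
from itertools import combinations

def floyd_warshall(n, links):
    """All-pairs shortest-path latencies from a list of (u, v, latency) links."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, lat in links:
        d[u][v] = d[v][u] = min(d[u][v], lat)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def place_hypervisors(n, links, controllers, switches, k):
    """Brute-force HPP: pick k hypervisor sites minimizing the worst-case
    controller -> hypervisor -> switch latency (one possible metric)."""
    d = floyd_warshall(n, links)
    best, best_sites = float("inf"), None
    for sites in combinations(range(n), k):
        worst = max(min(d[c][h] + d[h][s] for h in sites)
                    for c in controllers for s in switches)
        if worst < best:
            best, best_sites = worst, sites
    return best_sites, best

# Toy 5-node topology with link latencies in ms (assumed values).
links = [(0, 1, 2), (1, 2, 3), (2, 3, 1), (3, 4, 4), (0, 4, 10)]
print(place_hypervisors(5, links, controllers=[0], switches=[2, 4], k=1))
```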
Large content providers, known as hyper-giants, are responsible for sending the majority of content traffic to consumers. These hyper-giants operate highly distributed infrastructures to cope with the ever-increasing demand for online content. To achieve commercial-grade performance of Web applications, enhanced end-user experience, improved reliability, and scaled network capacity, hyper-giants are increasingly interconnecting with eyeball networks at multiple locations. This poses new challenges for both (1) the eyeball networks, which have to perform complex inbound traffic engineering, and (2) the hyper-giants, which have to map end-user requests to appropriate servers. We report on our multi-year experience in designing, building, rolling out, and operating the first-ever large-scale system, the Flow Director, which enables automated cooperation between one of the largest eyeball networks and a leading hyper-giant. We use empirical data collected at the eyeball network to evaluate its impact over two years of operation. We find very high compliance of the hyper-giant with the Flow Director's recommendations, resulting in (1) close to optimal user-server mapping, and (2) a 15% reduction of the hyper-giant's traffic overhead on the ISP's long-haul links, i.e., benefits for both parties and end-users alike.
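As a rough illustration of the cooperation idea (not the Flow Director's actual algorithm), the sketch below recommends, for each end-user prefix, the hyper-giant ingress point with the lowest intra-ISP path cost, which is the kind of recommendation that would keep traffic off long-haul links. All names, prefixes, and costs are hypothetical.

```python
def recommend_ingress(user_prefixes, ingress_points, path_cost):
    """For each end-user prefix, recommend the ingress point (PoP) with the
    lowest intra-ISP path cost. Purely illustrative of the cooperation idea."""
    return {
        prefix: min(ingress_points, key=lambda ingress: path_cost[(prefix, ingress)])
        for prefix in user_prefixes
    }

# Hypothetical path costs (e.g., weighted hop counts inside the eyeball network).
costs = {
    ("10.0.0.0/24", "PoP-A"): 3, ("10.0.0.0/24", "PoP-B"): 7,
    ("10.0.1.0/24", "PoP-A"): 6, ("10.0.1.0/24", "PoP-B"): 2,
}
print(recommend_ingress(["10.0.0.0/24", "10.0.1.0/24"], ["PoP-A", "PoP-B"], costs))
```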