Abstract: In current Data Center Networks (DCNs), Equal-Cost Multi-Path (ECMP) is used as the de-facto routing protocol. However, ECMP does not differentiate between short and long flows, the two main categories of flows according to their duration (lifetime). This causes hot-spots in the network, negatively affecting the Flow Completion Time (FCT) and the throughput, the two key performance metrics in data center networks. Previous work on load balancing proposed solutions such as splitting long flows into short flows, using per-packet forwarding approaches, and isolating the paths of short and long flows. We propose DiffFlow, a new load balancing solution that detects long flows and forwards their packets using Random Packet Spraying (RPS) with the help of SDN, whereas short flows are forwarded with ECMP by default. The use of ECMP for short flows is reasonable, as it does not create the out-of-order problem; at the same time, RPS for long flows can efficiently load-balance the entire network, given that long flows represent most of the traffic in DCNs. The results show that our DiffFlow solution outperforms the individual use of either RPS or ECMP, while the overall throughput is maintained at a level comparable to that of RPS.
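To make the forwarding logic concrete, the sketch below illustrates the kind of decision DiffFlow makes at a switch or SDN controller: flows beyond an assumed size threshold are sprayed per packet across all equal-cost paths (RPS), while the remaining short flows keep the usual hash-based ECMP path. The threshold value, the function names and the path representation are illustrative assumptions, not the authors' implementation.

```python
import hashlib
import random

# Assumed byte threshold beyond which a flow is treated as "long" (elephant flow).
LONG_FLOW_BYTES = 1_000_000

def ecmp_path(five_tuple, paths):
    """ECMP: hash the 5-tuple so all packets of a flow follow the same path."""
    digest = hashlib.sha256(repr(five_tuple).encode()).digest()
    return paths[int.from_bytes(digest[:4], "big") % len(paths)]

def rps_path(paths):
    """Random Packet Spraying: pick an equal-cost path independently per packet."""
    return random.choice(paths)

def forward(five_tuple, bytes_sent, paths):
    """DiffFlow-style decision: spray long flows, keep short flows on ECMP."""
    if bytes_sent >= LONG_FLOW_BYTES:
        return rps_path(paths)            # long flow: spread load, reordering tolerated
    return ecmp_path(five_tuple, paths)   # short flow: keep packets in order

# Example: a short flow sticks to one path, a long flow is sprayed per packet.
paths = ["spine-1", "spine-2", "spine-3", "spine-4"]
flow = ("10.0.0.1", "10.0.1.2", 40321, 80, "tcp")
print(forward(flow, 12_000, paths), forward(flow, 2_000_000, paths))
```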
In this paper, we study end-to-end service reliability in Data Center Networks (DCNs) with flow and Service Function Chain (SFC) parallelism. In our approach, i) large flows are split into multiple parallel smaller sub-flows; ii) SFCs, along with their VNFs, are replicated into at least as many VNF instances as there are sub-flows, resulting in parallel sub-SFCs; and iii) all sub-flows are distributed over multiple shortest paths and processed in parallel by the parallel sub-SFCs. We study service reliability as a function of flow and SFC parallelism and of the placement of parallel active and backup sub-SFCs within the DCN. Based on probability theory and considering both server and VNF failures, we analytically derive, for each studied VNF placement method, the probability that all sub-flows can be successfully processed by the parallelized SFC without service interruption. We evaluate the number of backup VNFs required to protect the parallelized SFC with a certain level of service reliability. The results show that the proposed flow and SFC parallelism in DCNs can significantly increase end-to-end service reliability while reducing the number of backup VNFs required, as compared to traditional SFCs with serial traffic flows.
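As a rough illustration of the kind of reliability expression the paper derives, the snippet below computes, under a simplified independent-failure model, the probability that every parallel sub-flow finds a working instance of every VNF in its sub-SFC. The per-instance availability, chain length and replication factor are assumed parameters; the authors' analysis additionally distinguishes server and VNF failures and placement methods, which this toy model omits.

```python
# Toy reliability model: independent failures, identical VNF instances.
def vnf_type_reliability(p_instance: float, n_instances: int) -> float:
    """Probability that at least one of n identical VNF instances is working."""
    return 1.0 - (1.0 - p_instance) ** n_instances

def sub_sfc_reliability(p_instance: float, chain_length: int, instances_per_vnf: int) -> float:
    """A sub-SFC works only if every VNF type in its chain has a working instance."""
    return vnf_type_reliability(p_instance, instances_per_vnf) ** chain_length

def parallel_sfc_reliability(p_instance, chain_length, num_sub_flows, instances_per_vnf):
    """The service survives only if all parallel sub-flows are processed without interruption."""
    return sub_sfc_reliability(p_instance, chain_length, instances_per_vnf) ** num_sub_flows

# Example: 4 sub-flows, 3 VNFs per sub-SFC, 0.99 per-instance availability,
# and 2 instances per VNF type (1 active + 1 backup).
print(parallel_sfc_reliability(0.99, 3, 4, 2))
```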
This paper considers service provider network design with a view to meeting application availability requirements. Our primary goal is to design a service provider network using disparate network components and low-availability Virtual Network Functions (VNFs) while achieving high-availability Service Function Chains (SFCs). To this end, we attempt to answer the questions of how much redundancy, and what type of redundancy, should be incorporated in the network. Initially, we model a state-of-the-art provider architecture that spans the access, metro-core and backbone networks with associated discrete components such as switches, routers, optical equipment and data centers. Our network design model leads to computing the amount of over-the-top (OTT) services that can be provisioned over a given network while achieving a particular availability. To this end, we formulate a constrained optimization model whose objective is profit maximization subject to the availability measures that OTT services demand. We also provide robustness constraints that accommodate traffic churn. Four heuristics are proposed whose objectives cover the three key impacting parameters, namely VNF licensing cost, server utilization and delay optimization, as well as the case of dynamic traffic. A simulation model presents comparative data for efficiency, latency and server utilization, and validates our optimization model. The results stress the importance of an efficient optimization model in planning the network, as well as of planning VNF placement ahead of time.
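The following toy script sketches the flavor of the constrained optimization described above: maximize profit from admitted OTT services subject to an availability (SLA) constraint and a server-capacity constraint, with VNF licensing cost as the main expense. All numbers, the brute-force search and the simple parallel-redundancy availability formula are illustrative assumptions rather than the paper's actual formulation.

```python
from itertools import product

REVENUE_PER_SERVICE = 10.0      # assumed revenue per admitted OTT service
LICENSE_COST_PER_VNF = 3.0      # assumed VNF licensing cost per replica
COMPONENT_AVAILABILITY = 0.995  # assumed availability of one VNF replica
SLA_TARGET = 0.9999             # availability the OTT service demands
SERVER_CAPACITY = 20            # maximum VNF replicas the servers can host

def service_availability(replicas: int) -> float:
    """Availability of a VNF function protected by 'replicas' parallel copies."""
    return 1.0 - (1.0 - COMPONENT_AVAILABILITY) ** replicas

best = None
for services, replicas in product(range(1, 11), range(1, 6)):
    if services * replicas > SERVER_CAPACITY:
        continue  # capacity constraint
    if service_availability(replicas) < SLA_TARGET:
        continue  # availability constraint
    profit = services * REVENUE_PER_SERVICE - services * replicas * LICENSE_COST_PER_VNF
    if best is None or profit > best[0]:
        best = (profit, services, replicas)

print("profit, services admitted, replicas per service:", best)
```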
Abstract: Recently, physical layer security in the optical layer has gained significant traction. Security threats in optical networks generally impact the reliability of optical transmission. Linear Network Coding (LNC) can protect against both security threats in the form of eavesdropping and faulty transmission due to jamming. LNC can mix the original data so that it becomes incomprehensible to an attacker and can also extend the original data with coding redundancy, thus protecting the data from errors injected via jamming attacks. In this paper, we study the effectiveness of LNC in balancing reliable transmission and security in optical networks. To this end, we combine the coding process with data flow parallelization at the source and propose and compare optimal and randomized path selection methods for parallel transmission. The study shows that a combination of data parallelization, LNC and randomized path selection increases the security and reliability of the transmission. We analyze the so-called catastrophic security threat to the network and show that, with a conventional transmission scheme and in the absence of LNC, an attacker could eavesdrop on or disrupt the whole secret data by accessing only one edge in the network.
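The snippet below is a minimal sketch of the idea behind combining LNC with flow parallelization: the source splits the secret into k blocks, transmits n > k random GF(2) combinations over different paths, and a receiver needs k linearly independent combinations to decode, while an attacker tapping a single edge sees only one incomprehensible mixture. The block sizes, the choice of k and n, and the XOR-based decoder are assumptions for illustration, not the coding scheme evaluated in the paper.

```python
import random

def encode(blocks, n):
    """Emit n coded packets, each a random GF(2) combination: (coefficient mask, XOR payload)."""
    k = len(blocks)
    coded = []
    for _ in range(n):
        coeffs = random.getrandbits(k) or 1  # nonzero GF(2) coefficient vector
        payload = 0
        for i in range(k):
            if coeffs >> i & 1:
                payload ^= blocks[i]
        coded.append((coeffs, payload))
    return coded

def decode(coded, k):
    """Gaussian elimination over GF(2); returns the k original blocks or None."""
    basis = {}  # pivot bit -> (coefficient mask, payload)
    for coeffs, payload in coded:
        for pivot in sorted(basis, reverse=True):
            if coeffs >> pivot & 1:
                coeffs ^= basis[pivot][0]
                payload ^= basis[pivot][1]
        if coeffs:
            basis[coeffs.bit_length() - 1] = (coeffs, payload)
    if len(basis) < k:
        return None  # not enough independent combinations (e.g. a single tapped edge)
    for j in sorted(basis):  # back-substitute to isolate each original block
        cj, pj = basis[j]
        for i in basis:
            if i != j and basis[i][0] >> j & 1:
                basis[i] = (basis[i][0] ^ cj, basis[i][1] ^ pj)
    return [basis[i][1] for i in range(k)]

blocks = [0xDEAD, 0xBEEF, 0xC0DE]  # the "secret", split into k = 3 blocks
coded = encode(blocks, n=5)        # 5 coded packets sent over 5 parallel paths
print(decode(coded, k=3))          # receiver with all paths: decodes with high probability
print(decode(coded[:1], k=3))      # attacker on one edge: None, nothing recoverable
```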