Network Service Chaining (NSC) is a service deployment concept that promises increased flexibility and cost efficiency for future carrier networks. NSC has recently received considerable attention in the standardization and research communities. However, NSC remains largely undefined in the peer-reviewed literature; in fact, a literature review reveals that the role of NSC enabling technologies is still under discussion, as are the key research challenges lying ahead. This paper addresses these topics by motivating our research interest in advanced dynamic NSC and detailing the main aspects to be considered in the context of carrier-grade telecommunication networks. We present design considerations and system requirements alongside use cases that illustrate the advantages of adopting NSC. We detail prominent research challenges arising during the typical lifecycle of a network service chain in an operational telecommunications network, including service chain description, programming, deployment, and debugging, and we summarize our security considerations. We conclude with an outlook on future work in this area.
With the ever-increasing amount of traffic, scalability is probably the most important factor that differentiates the existing approaches to traffic classification. This paper focuses on payload-based classification and compares the results obtained with a "lightweight" traffic classification approach against those obtained with a "completely stateful" approach, demonstrating that the former, albeit less precise, is still appropriate for a large class of applications.
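As a rough illustration of what a "lightweight" payload-based check can look like (this is a hypothetical sketch, not the classifier evaluated in the paper), the following C fragment inspects only the first bytes of a flow's payload against a handful of well-known signatures instead of maintaining full protocol state; the signatures and labels shown are examples.

    /* Illustrative sketch, not the paper's classifier: a "lightweight"
     * payload-based check that looks only at the first bytes of the first
     * packet of a flow, with no per-flow protocol state. */
    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <stddef.h>

    static const char *classify_payload(const uint8_t *p, size_t len)
    {
        if (len >= 4 && (memcmp(p, "GET ", 4) == 0 || memcmp(p, "POST", 4) == 0))
            return "HTTP";
        if (len >= 3 && p[0] == 0x16 && p[1] == 0x03)   /* TLS handshake record */
            return "TLS";
        if (len >= 4 && memcmp(p, "SSH-", 4) == 0)
            return "SSH";
        return "unknown";                               /* fall back / defer */
    }

    int main(void)
    {
        const uint8_t sample[] = "GET /index.html HTTP/1.1\r\n";
        printf("classified as: %s\n",
               classify_payload(sample, sizeof(sample) - 1));
        return 0;
    }

The trade-off matches the one described in the abstract: such a check is cheap and stateless, but it can misclassify flows whose distinguishing behavior appears only deeper in the conversation.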
Network Functions Virtualization (NFV) aims to transform network functions into software images executed on standard, high-volume hardware. This paper focuses on the case in which a massive number of (tiny) network function instances are executed simultaneously on the same server, and it presents our experience in designing the components that move traffic across those functions, based on the primitives offered by the Intel DPDK framework. The paper proposes several possible architectures, characterizes the resulting implementations, and evaluates their applicability under different constraints.
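As context for the kind of primitives mentioned above, the sketch below shows one common DPDK pattern for handing packets to a co-located network function instance through a software ring (rte_ring). This is an illustrative assumption, not the architecture chosen in the paper; names such as nf0_ring, dispatch_burst, and nf0_loop are hypothetical, and NIC/port configuration is omitted.

    /* Hedged sketch, not the paper's implementation: moving packets to a
     * co-located network function instance through a DPDK software ring.
     * NIC/port setup and error handling are omitted for brevity. */
    #include <rte_eal.h>
    #include <rte_launch.h>
    #include <rte_lcore.h>
    #include <rte_ring.h>
    #include <rte_mbuf.h>

    #define BURST 32

    static struct rte_ring *nf0_ring;   /* per-function input queue */

    /* Dispatcher side: push a burst of received mbufs to the function;
     * packets that do not fit are dropped. */
    static void dispatch_burst(struct rte_mbuf **pkts, unsigned int n)
    {
        unsigned int sent =
            rte_ring_enqueue_burst(nf0_ring, (void **)pkts, n, NULL);
        for (unsigned int i = sent; i < n; i++)
            rte_pktmbuf_free(pkts[i]);
    }

    /* Network function side: poll the ring and process each packet. */
    static int nf0_loop(void *arg)
    {
        struct rte_mbuf *pkts[BURST];
        (void)arg;
        for (;;) {
            unsigned int n = rte_ring_dequeue_burst(nf0_ring, (void **)pkts,
                                                    BURST, NULL);
            for (unsigned int i = 0; i < n; i++) {
                /* ...actual network function logic would run here... */
                rte_pktmbuf_free(pkts[i]);
            }
        }
        return 0;
    }

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            return -1;
        nf0_ring = rte_ring_create("nf0_in", 1024, rte_socket_id(),
                                   RING_F_SP_ENQ | RING_F_SC_DEQ);
        if (nf0_ring == NULL)
            return -1;
        /* Run the function instance on another lcore; the main lcore would
         * receive from the NIC and call dispatch_burst(). */
        rte_eal_remote_launch(nf0_loop, NULL, rte_get_next_lcore(-1, 1, 0));
        rte_eal_mp_wait_lcore();
        return 0;
    }

A single-producer/single-consumer ring keeps this hand-off lock-free, which matters when many tiny function instances share the same server; the paper's comparison of architectures concerns exactly this kind of design choice.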
The extended Berkeley Packet Filter (eBPF) is a recent technology available in the Linux kernel that enables flexible data processing. However, so far eBPF has mainly been used for monitoring tasks such as tracking memory, CPU usage, page faults, and traffic, with only a few examples of traditional network services, e.g., services that modify data in transit. In fact, the creation of complex network functions that go beyond simple proof-of-concept data plane applications has proven to be challenging because of several limitations of this technology, yet at the same time very promising thanks to characteristics (e.g., dynamic recompilation of the source code) that are not available elsewhere. Based on our experience, this paper presents the most promising characteristics of this technology and the main limitations we encountered, and it envisions solutions that can mitigate the latter. We also summarize the most important lessons learned while exploiting eBPF to create complex network functions and, finally, we provide a quantitative characterization of the most significant aspects of this technology.
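For readers unfamiliar with the technology, the following minimal XDP program (a generic example, not one of the network functions discussed in the paper) shows the flavor of eBPF data-plane code: it counts IPv4 packets in a per-CPU map and lets all traffic through. It would typically be compiled with clang -O2 -target bpf and attached to an interface with standard tooling.

    /* Generic eBPF/XDP example, not one of the paper's network functions:
     * count IPv4 packets in a per-CPU map and pass all traffic. */
    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_endian.h>

    struct {
        __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
        __uint(max_entries, 1);
        __type(key, __u32);
        __type(value, __u64);
    } pkt_count SEC(".maps");

    SEC("xdp")
    int count_ipv4(struct xdp_md *ctx)
    {
        void *data     = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;
        struct ethhdr *eth = data;

        /* Bounds check required by the in-kernel verifier. */
        if ((void *)(eth + 1) > data_end)
            return XDP_PASS;

        if (eth->h_proto == bpf_htons(ETH_P_IP)) {
            __u32 key = 0;
            __u64 *val = bpf_map_lookup_elem(&pkt_count, &key);
            if (val)
                (*val)++;
        }
        return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";

The explicit bounds check before touching the Ethernet header is imposed by the in-kernel verifier and is a small example of the restrictions that make more complex functions challenging to write, as the abstract notes.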