Programmable network hardware can run services traditionally deployed on servers, resulting in orders-of-magnitude improvements in performance. Yet, despite these performance improvements, network operators remain skeptical of in-network computing. The conventional wisdom is that the operational costs from increased power consumption outweigh any performance benefits. Unless in-network computing can justify its costs, it will be disregarded as yet another academic exercise. In this paper, we challenge that assumption by providing a detailed power analysis of several in-network computing use cases. Our experiments show that in-network computing can be extremely power-efficient. In fact, for a single watt, a software system on a commodity CPU can be improved by a factor of 100 using an FPGA, and by a factor of 1000 using an ASIC implementation. However, this efficiency depends on the system load. To address changing workloads, we propose in-network computing on demand, where services can be dynamically moved between servers and the network. By shifting the placement of services on demand, data centers can optimize for both performance and power efficiency.
In-network computing accelerates applications natively running on the host by executing them within network devices. While in-network computing offers significant performance improvements, its limitations and design trade-offs have not been explored. To usefully and efficiently run applications within the network, we first need to understand the implications of their design. In this work we introduce LaKe, a Layered Key-Value Store design, running as an in-network application. LaKe is a scalable design, enabling the exploration of design decisions and their effect on throughput, latency, and power efficiency. LaKe achieves full line-rate throughput, while maintaining a latency of 1.1µs and better power efficiency than existing hardware-based memcached designs.
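The layered idea behind a design like LaKe can be illustrated with a small software sketch. The class below is hypothetical (its names, layer structure, and eviction policy are assumptions, not the published design): lookups first check a small fast layer, and misses fall through to a larger, slower backing layer, mirroring on-chip memory versus DRAM in a hardware key-value store.

```python
# Hypothetical sketch of a layered key-value lookup (assumed structure,
# not LaKe's actual implementation): a small fast cache layer is checked
# first; misses fall through to a larger, slower backing store.

class LayeredStore:
    def __init__(self, cache_capacity):
        self.cache = {}                 # fast layer (e.g. on-chip memory)
        self.backing = {}               # slow layer (e.g. DRAM / host memory)
        self.capacity = cache_capacity

    def set(self, key, value):
        self.backing[key] = value
        self._promote(key, value)

    def get(self, key):
        if key in self.cache:           # hit in the fast layer
            return self.cache[key]
        value = self.backing.get(key)   # fall through to the slow layer
        if value is not None:
            self._promote(key, value)   # cache for future lookups
        return value

    def _promote(self, key, value):
        if len(self.cache) >= self.capacity and key not in self.cache:
            self.cache.pop(next(iter(self.cache)))  # evict oldest-inserted entry
        self.cache[key] = value

store = LayeredStore(cache_capacity=2)
store.set("a", 1)
store.set("b", 2)
store.set("c", 3)        # evicts a cached entry; "a" remains in the backing layer
print(store.get("a"))    # -> 1 (served from the backing layer, then promoted)
```

In hardware, the point of such layering is that most requests are answered by the fast layer, so throughput and power are dominated by cheap lookups, while the slow layer preserves capacity.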
As data sets grow rapidly in size and number, outlier detection that filters out unnecessary normal information becomes important. In this paper, we propose to move unsupervised outlier detection from the application layer to a network interface card (NIC). Only anomalous items or events are passed to the network protocol stack; all other packets are discarded at the NIC. The storage and computation demands at the host are thus dramatically reduced. However, because normal items are discarded at the NIC, the application layer can no longer know what is normal; in our approach, the application at the host therefore periodically peeks at the NIC buffer. We select Mahalanobis-distance-based outlier detection as one of the simplest algorithms. Our approach is implemented on an FPGA-based NIC with 10GbE interfaces. We analyze the trade-off between the sampling frequency of the NIC buffer and outlier detection precision. Real experiments using the FPGA NIC demonstrate a throughput of 14,000,000 samples per second, which is close to the 10GbE line rate.