To simplify the management of traditional networks, software-defined networking (SDN) has been proposed as a promising paradigm shift that decouples the control plane from the data plane, providing programmability for network configuration. With the deployment and application of SDN, researchers have found that controller placement directly affects network performance. In this paper, the state of the art of the controller placement problem is surveyed from the perspective of the optimization objective. First, we give an overview of SDN and the controller placement problem. Then, we classify work on the controller placement problem into four categories (latency, reliability, cost, and multi-objective) according to its objective and analyze specific algorithms in different application scenarios. Finally, we identify relevant open issues and research challenges to be addressed in the future and conclude the survey of the controller placement problem.
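To make the latency category concrete, the sketch below shows a common formulation of the latency-oriented controller placement objective: choose k controller locations that minimize the average switch-to-controller shortest-path latency. The brute-force search and the toy latency matrix are illustrative assumptions, not an algorithm taken from the surveyed literature.

```python
from itertools import combinations

def avg_latency(dist, controllers):
    """Average distance from each switch to its nearest controller."""
    n = len(dist)
    return sum(min(dist[s][c] for c in controllers) for s in range(n)) / n

def best_placement(dist, k):
    """Brute-force search over all k-subsets of nodes (fine for tiny topologies)."""
    nodes = range(len(dist))
    return min(combinations(nodes, k), key=lambda cs: avg_latency(dist, cs))

# Toy 4-node topology given as an all-pairs shortest-path latency matrix.
dist = [
    [0, 1, 3, 4],
    [1, 0, 2, 3],
    [3, 2, 0, 1],
    [4, 3, 1, 0],
]
placement = best_placement(dist, 2)
print(placement, avg_latency(dist, placement))
```

Real algorithms in the survey replace the exhaustive search with heuristics or clustering, since the number of k-subsets grows combinatorially with network size.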
Recently there has been a surge of research on improving the communication efficiency of distributed training. However, little work has been done to systematically understand whether the network is the bottleneck and, if so, to what extent. In this paper, we take a first-principles approach to measure and analyze the network performance of distributed training. As expected, our measurements confirm that communication is the component that prevents distributed training from scaling out linearly. However, contrary to common belief, we find that the network runs at low utilization and that, if the network could be fully utilized, distributed training would achieve a scaling factor close to one. Moreover, while many recent proposals on gradient compression advocate compression ratios of over 100×, we show that under full network utilization there is no need for gradient compression on a 100 Gbps network. On the other hand, a lower-speed network such as 10 Gbps requires only a 2×-5× gradient compression ratio to achieve almost linear scale-out. Compared to application-level techniques like gradient compression, network-level optimizations require no changes to applications and do not hurt the performance of trained models. As such, we argue that the real challenge of distributed training is for the network community to develop high-performance network transport that fully utilizes the network capacity and achieves linear scale-out.
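The intuition behind the compression-ratio claim can be illustrated with a back-of-envelope calculation: the compression needed is roughly the ratio of gradient transfer time (at full link utilization) to per-iteration compute time. The model size, gradient precision, and iteration time below are assumed for illustration only, not measurements from the paper.

```python
def required_compression(model_params, bytes_per_param, compute_time_s, link_gbps):
    """Rough compression ratio needed so that gradient exchange at full link
    utilization fits within one iteration's compute time (overlap ignored)."""
    gradient_bits = model_params * bytes_per_param * 8
    comm_time_s = gradient_bits / (link_gbps * 1e9)
    # If communication already fits within compute, no compression is needed.
    return max(1.0, comm_time_s / compute_time_s)

# Hypothetical workload: 100M-parameter model, fp32 gradients, 100 ms/iteration.
params, bpp, t = 100e6, 4, 0.1
r10 = required_compression(params, bpp, t, 10)    # 10 Gbps link
r100 = required_compression(params, bpp, t, 100)  # 100 Gbps link
print(round(r10, 2), round(r100, 2))
```

Under these assumed numbers, the 10 Gbps link needs only a few-fold compression while the 100 Gbps link needs none, which matches the qualitative conclusion of the abstract.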