Wide-area distributed cloud computing, where data are processed and stored at micro datacenters (PDCs) located near terminals, improves response time and reduces traffic in wide-area networks. However, it is difficult for the cloud to maintain service in the case of PDC failure because the robustness of PDCs is substantially lower than that of conventional DCs. We propose a flow control system that transfers flows to other PDCs by converting the L2, L3, and L4 headers of received packets via OpenFlow when PDC failures occur. The proposed system enables flows to be transferred to other PDCs even from M2M terminals, such as sensors, that are unable to change the destination IP address. We also developed a distributed control system to solve the critical problem of switching delay caused by the packet-in mechanism used when applying OpenFlow to the distributed cloud. Emulation results showed that, when a flow was transferred, throughput under distributed control stabilized 12 times as quickly as under centralized control.
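The core mechanism of the abstract above can be illustrated with a small sketch. This is not the authors' implementation: the function name `redirect_flow`, the header-field names, and the PDC addresses are all assumptions made for illustration. The point is that rewriting the L2 (MAC), L3 (IP), and L4 (port) destination fields in the network lets a flow from a fixed-destination sensor reach a backup PDC without any change on the terminal side.

```python
# Illustrative sketch (assumed names/addresses) of OpenFlow-style header
# rewriting: packets addressed to a failed PDC are redirected to a backup
# PDC by rewriting their L2/L3/L4 destination fields in the network.

def redirect_flow(pkt, failed_pdc, backup_pdc):
    """Rewrite destination headers if the packet targets the failed PDC."""
    if pkt["ip_dst"] != failed_pdc["ip"]:
        return pkt  # flow is unaffected by the failure
    rewritten = dict(pkt)
    rewritten["eth_dst"] = backup_pdc["mac"]   # L2 rewrite
    rewritten["ip_dst"] = backup_pdc["ip"]     # L3 rewrite
    rewritten["tcp_dst"] = backup_pdc["port"]  # L4 rewrite
    return rewritten

failed = {"ip": "10.0.1.10", "mac": "aa:aa:aa:aa:aa:01", "port": 5000}
backup = {"ip": "10.0.2.10", "mac": "aa:aa:aa:aa:aa:02", "port": 5000}
pkt = {"eth_dst": failed["mac"], "ip_dst": failed["ip"], "tcp_dst": 5000}
print(redirect_flow(pkt, failed, backup)["ip_dst"])  # 10.0.2.10
```

In the paper's setting this rewriting is installed as OpenFlow flow rules in switches rather than performed in Python; the sketch only shows which fields change.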
This paper presents an analysis of a performance bottleneck in enterprise file servers using Linux and proposes a modification to this operating system for avoiding the bottleneck. The analysis shows that metadata cache deallocation in current Linux causes large latency in file-request processing when the operational throughput of a file server becomes high. To eliminate the latency caused by metadata cache deallocation, a new method, called "split reclaim," which separates metadata cache deallocation from conventional cache deallocation, is proposed. It is experimentally shown that the split-reclaim method reduces the worst response time by more than 95% and achieves three times higher throughput under a metadata-intensive workload. The split-reclaim method also reduces latency caused by cache deallocation under a general file-server workload by more than 99%. These results indicate that the split-reclaim method can eliminate metadata cache deallocation latency and make it possible to use commodity servers as enterprise file servers.
Index Terms—Cache memory, file servers, memory management, scalability.
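A simplified cost model can convey the split-reclaim idea described above. This is a conceptual sketch, not kernel code: the function names and per-object costs are invented for illustration. Conventional reclaim pays for scanning both the page cache and the metadata cache on the foreground request path, whereas split reclaim moves the (expensive) metadata portion off that path.

```python
# Conceptual model (assumed costs, simplified) of "split reclaim":
# conventional reclaim frees page cache and metadata cache in one
# synchronous pass; split reclaim handles metadata deallocation on a
# separate path, so foreground requests pay only for page-cache reclaim.

def conventional_reclaim(page_pages, meta_objects, cost_page=1, cost_meta=5):
    # foreground latency includes scanning both caches
    return page_pages * cost_page + meta_objects * cost_meta

def split_reclaim(page_pages, meta_objects, cost_page=1, cost_meta=5):
    foreground = page_pages * cost_page    # only page cache is reclaimed in-line
    background = meta_objects * cost_meta  # metadata is freed asynchronously
    return foreground, background

print(conventional_reclaim(100, 100))  # 600
print(split_reclaim(100, 100))         # (100, 500)
```

The total reclaim work is unchanged; what improves is the latency observed by the file request that triggered reclaim, matching the worst-response-time reduction the abstract reports.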
SUMMARY Various services have been deployed on the Internet, and the role of Internet services as a societal infrastructure is becoming increasingly important. As demand for services on the Internet increases, the burden placed on servers that provide those services increases, which leads to reductions in the quality of service, such as service delays or, in the worst case, temporary stoppage of services. Distributing the demand for a service among multiple computers, by preparing multiple computers capable of providing the same service, is commonly employed as a way of minimizing reduction in the quality of service. However, since the demand for a service on the Internet continually fluctuates, it is difficult to predict the demand in order to have computing resources in place ahead of time. In this paper, a basis is therefore proposed for automatically adjusting the total amount of computing resources for providing a service according to the variation in demand for the service. The server group formed according to this basis is referred to as an elastic server group. In an elastic server group, several computers that communicate with each other using a peer-to-peer system are working, and the number of computers providing the service is increased or decreased. In this paper, an elastic server group and a basic mechanism for increasing or decreasing the number of computers providing a service in the elastic server group are proposed, and the effectiveness of this approach is verified using a simulation.
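The basic increase/decrease rule of an elastic server group can be sketched as a threshold policy. The function name, thresholds, and per-server capacity below are assumptions for illustration, not the paper's parameters: the group grows when per-server load is high and shrinks when it is low.

```python
# Illustrative sketch (assumed thresholds/capacity) of an elastic server
# group's resizing rule: add a peer when per-server load exceeds a high
# watermark, release one when it falls below a low watermark.

def resize(group_size, total_demand, capacity=100, high=0.8, low=0.3):
    load = total_demand / (group_size * capacity)
    if load > high:
        return group_size + 1                 # recruit another peer
    if load < low and group_size > 1:
        return group_size - 1                 # release an idle peer
    return group_size

size = 4
for demand in (350, 420, 90):  # fluctuating demand over time
    size = resize(size, demand)
print(size)  # 5
```

In the proposed system this decision is made cooperatively by peers exchanging load information over the peer-to-peer system, rather than by a central controller.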
SUMMARY In a wide-area distributed environment such as the Internet, users exchange information by using a common application protocol. Such an application-layer protocol forms the basis of information communications, and once it is widely spread, it is difficult to replace with a new improved or extended protocol. To promote the spread of a new protocol, the following conditions are necessary: (1) Both the server and clients should be able to use the new protocol. (2) It should be possible to handle the new protocol without modifying the existing client application. (3) The user should be able to handle the new protocol transparently. This paper presents a method that satisfies the above three conditions by using a mobile code technique in order to facilitate the spread of a new application-layer protocol. In the proposed method, the mobile code that converts the existing protocol to the new protocol is distributed transparently to clients, and widespread use of the new protocol is thereby realized. By making the mobile code operate within the restrictions of a sandbox, the security of the client computer against defects in the mobile code is improved. The proposed method is independent of the architecture of the distributed mobile code. In this study, the mobile code is implemented in i386 native code.
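The role of the distributed mobile code, a transparent converter between an existing protocol and a new one, can be illustrated with a toy translator. Both protocol formats here are invented for illustration; the paper's mobile code is i386 native code running in a sandbox, not Python.

```python
# Hypothetical sketch of the mobile code's job: convert requests of an
# existing protocol into a new protocol transparently, so the unmodified
# client application keeps speaking the old protocol. The "GET"/"FETCH"
# message formats are invented for this example.

def legacy_to_new(request: str) -> str:
    """Translate a legacy 'GET <name>' request into a new 'FETCH <name> V2' form."""
    verb, _, arg = request.partition(" ")
    if verb == "GET":
        return f"FETCH {arg} V2"
    return request  # unknown verbs pass through unchanged

print(legacy_to_new("GET index.html"))  # FETCH index.html V2
```

Because the converter sits between the client application and the network, conditions (2) and (3) above hold: the application is unmodified and the user never sees the new protocol directly.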