Abstract. Application layer networks are software architectures that allow the provisioning of services requiring a huge amount of resources by connecting large numbers of individual computers, as in Grid or Peer-to-Peer computing. Controlling resource allocation in such networks is nearly impossible with a centralized arbitrator. The network simulation project CATNET will evaluate a decentralized mechanism for resource allocation, based on the economic paradigm of the Catallaxy, against a centralized mechanism using an arbitrator object. In both versions, software agents buy and sell network services and resources to and from each other. The economic model is based on self-interested maximization of utility and self-interested cooperation between agents. This article describes and compares the setup of money and message flows for centralized and decentralized coordination.
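To make the centralized money and message flow mentioned in this abstract concrete, the following is a minimal sketch, not the CATNET implementation: a hypothetical Arbitrator object collects buy offers (bids) and sell offers (asks) from agents and matches them centrally, with payment assumed to flow at the midpoint price. The class name, the price-priority matching rule and the midpoint pricing are illustrative assumptions.

    # Minimal sketch of a centralized flow: agents send bids/asks to an
    # arbitrator object, which matches them and sets the trade price.
    # Names and matching rule are illustrative assumptions.

    class Arbitrator:
        def __init__(self):
            self.bids = []   # buy offers: (buyer_id, max_price), sent agent -> arbitrator
            self.asks = []   # sell offers: (seller_id, min_price)

        def submit_bid(self, buyer_id, max_price):
            self.bids.append((buyer_id, max_price))

        def submit_ask(self, seller_id, min_price):
            self.asks.append((seller_id, min_price))

        def match(self):
            """Match highest bids with lowest asks; money flows at the midpoint price."""
            trades = []
            bids = sorted(self.bids, key=lambda b: -b[1])
            asks = sorted(self.asks, key=lambda a: a[1])
            while bids and asks and bids[0][1] >= asks[0][1]:
                (buyer, bid), (seller, ask) = bids.pop(0), asks.pop(0)
                trades.append((buyer, seller, (bid + ask) / 2))
            return trades

    arbitrator = Arbitrator()
    arbitrator.submit_bid("client-A", 9.0)
    arbitrator.submit_ask("service-1", 6.0)
    print(arbitrator.match())   # [('client-A', 'service-1', 7.5)]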
Efficient resource allocation in dynamic large-scale environments is one of the challenges of Grids. In centralized economic-based allocation approaches, user requests can be matched to the fastest, cheapest or most available resource. This approach, however, shows limitations in scalability and in dynamic environments. In this paper, we explore a decentralized economic approach to resource allocation in Grid markets based on the Catallaxy paradigm. Catallactic agents discover selling nodes in the resource and service Grid markets and negotiate with each other, maximizing their utility by following a strategy. By means of simulations, we evaluate the behavior of the approach, its resource allocation efficiency and its performance under different demand loads, Grid densities and degrees of dynamism. Our results indicate that while the decentralized economic approach based on Catallaxy, applied to Grid markets, shows efficiency similar to a centralized system, its decentralized operation provides additional advantages: scalability with demand and offer, and robustness in dynamic environments.
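As an illustration of the negotiation behavior summarized in this abstract, the following is a minimal sketch of a bilateral alternating-offers exchange between a buying and a selling agent, each conceding toward a private reserve price until their offers cross. The class names and the linear concession strategy are assumptions made for illustration, not the strategy actually used by the Catallactic agents.

    # Minimal sketch of decentralized bilateral negotiation: buyer and seller
    # exchange offers directly and concede toward private reserve prices.
    # The linear concession strategy is an illustrative assumption.

    class TradingAgent:
        def __init__(self, reserve_price, start_price, concession):
            self.reserve_price = reserve_price  # worst acceptable price (private)
            self.price = start_price            # current offer
            self.concession = concession        # concession per round

    class Buyer(TradingAgent):
        def next_offer(self):
            # concede upward, never beyond the private reserve price
            self.price = min(self.price + self.concession, self.reserve_price)
            return self.price

    class Seller(TradingAgent):
        def next_offer(self):
            # concede downward, never below the private reserve price
            self.price = max(self.price - self.concession, self.reserve_price)
            return self.price

    def negotiate(buyer, seller, max_rounds=20):
        """Alternating-offers loop: returns the agreed price or None on failure."""
        for _ in range(max_rounds):
            bid, ask = buyer.next_offer(), seller.next_offer()
            if bid >= ask:                 # expectations crossed: deal struck
                return (bid + ask) / 2     # split the difference
        return None

    buyer = Buyer(reserve_price=10.0, start_price=2.0, concession=1.0)
    seller = Seller(reserve_price=5.0, start_price=15.0, concession=1.0)
    print("agreed price:", negotiate(buyer, seller))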
Allocation of Resources in Application Layer Networks. Application-layer networks (ALN) are software architectures that coordinate the provisioning of services requiring a huge amount of resources by connecting large numbers of individual computers. Global Internet-based networks, like today's Grids [2] and Peer-to-Peer Computing [14], take advantage of such infrastructures with applications like multicast services for global audiences, storage repositories of peta-scale data sets, or parallel computing applications requiring teraflops of processing power. Such applications are executed in multiple resource locations distributed throughout the Internet, coordinated on the application layer using a dedicated network, the ALN.

An ALN scenario would be the distributed provisioning of web services for Adobe's Acrobat (for creating PDF files). Here, word-processor client programs would transparently address the nearest/cheapest Acrobat service instance in order to create PDF files. The overall objective of the ALN would be (a) to always provide access to some Acrobat service instance, such that a minimum number of service demands have to be rejected, and (b) to optimize network parameters such as provisioning and transmission costs. This paper assumes that the future development of these applications will lead to clients paying for access to a service and the corresponding on- or offline exchange of payment; the individual goal of a client would then be to access a service cheaply, while services may try to maximize income.

In order to keep an ALN operational, service control and resource allocation mechanisms are required. Their basic purpose would be to match service supply and demand, in the likely case of multiple, redundant service instances, so as to meet those objectives. The simple service discovery mechanisms available today in decentralized networks (e.g. Jini [19]) seldom provide such functionality, as the case of redundant service instances is still rare. However, realizing these mechanisms with a centralized coordinator instance (auctioneer, arbitrator, dispatcher, scheduler, manager), as e.g. in GLOBUS [8] or CONDOR-G [9], has several drawbacks.

First, ALN and the underlying networks are very dynamic and fast-changing systems: service demands and node connectivity change frequently, and new services are created and composed continuously. Information collected from the network is already outdated when it reaches the coordinator; any solution computed on the basis of this information optimizes a past and inconsistent state of the network. Dynamic ALN need a continuous, real-time coordination mechanism which reflects the changes in the environment. A second, related property is that the coordinator would have to have global knowledge of the state of the network. This is mostly achieved by calculating the time steps such that actual status information from all nodes arrives safely at the coordination instance. However, if the ...
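The scenario above, in which clients transparently address the nearest/cheapest service instance without a central coordinator, can be pictured with a minimal discovery sketch: a client floods a bounded-hop query through neighbouring ALN nodes and selects the cheapest instance that answers. The dictionary-based topology, the hop limit and the service name are illustrative assumptions, not a specific ALN protocol.

    # Minimal sketch of decentralized, bounded-hop service discovery.
    # Topology, hop limit and service names are illustrative assumptions.

    def discover(node, service, hops, visited=None):
        """Return (price, provider) offers for `service` reachable within `hops` hops."""
        visited = visited if visited is not None else set()
        if node["id"] in visited:
            return []
        visited.add(node["id"])
        offers = [(price, node["id"])
                  for name, price in node["services"].items() if name == service]
        if hops > 0:
            for neighbour in node["neighbours"]:
                offers += discover(neighbour, service, hops - 1, visited)
        return offers

    # Tiny example topology: two nodes offering "pdf-render" at different prices.
    n2 = {"id": "n2", "services": {"pdf-render": 4.0}, "neighbours": []}
    n1 = {"id": "n1", "services": {"pdf-render": 6.0}, "neighbours": [n2]}
    client = {"id": "c", "services": {}, "neighbours": [n1]}

    offers = discover(client, "pdf-render", hops=2)
    print(min(offers))   # cheapest instance found: (4.0, 'n2')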
Grid computing has recently become an important paradigm for managing computationally demanding applications, composed of a collection of services. The dynamic discovery of services, and the selection of a particular service instance providing the best value out of the discovered alternatives, poses a complex multi-attribute n:m allocation decision problem, which is often solved using a centralized resource broker. To manage complexity, this article proposes a two-layer architecture for service discovery in such Application Layer Networks (ALN). The first layer consists of a service market in which complex services are translated to a set of basic services, which are distinguished by price and availability. The second layer provides an allocation of services to appropriate resources in order to enact the specified services. This framework comprises the foundations for a later comparison of centralized and decentralized market mechanisms for allocation of services and resources in ALNs and Grids.
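A heavily simplified sketch of the two-layer idea described above, under the assumption of a greedy cheapest-offer rule on both markets (the paper's actual mechanism addresses a far richer multi-attribute n:m decision problem): a complex service is first decomposed into basic services on the service market, and each basic service is then mapped to resources on the resource market. All service, provider and resource names are hypothetical.

    # Two-layer sketch: service market (layer 1) decomposes a complex service
    # into priced basic services; resource market (layer 2) maps each basic
    # service to priced resources. Greedy cheapest-offer rule is an assumption.

    COMPLEX_SERVICES = {"pdf-workflow": ["convert", "store"]}
    SERVICE_OFFERS = {"convert": {"sp-1": 3.0, "sp-2": 2.5},
                      "store":   {"sp-3": 1.0}}
    RESOURCE_OFFERS = {"cpu":  {"rp-1": 0.5, "rp-2": 0.4},
                       "disk": {"rp-3": 0.2}}
    SERVICE_RESOURCES = {"convert": ["cpu"], "store": ["disk"]}

    def allocate(complex_service):
        """Greedy cheapest-offer allocation across both layers."""
        plan = {}
        for basic in COMPLEX_SERVICES[complex_service]:
            provider, price = min(SERVICE_OFFERS[basic].items(), key=lambda kv: kv[1])
            resources = {r: min(RESOURCE_OFFERS[r].items(), key=lambda kv: kv[1])
                         for r in SERVICE_RESOURCES[basic]}
            plan[basic] = {"service_provider": provider, "price": price,
                           "resources": resources}
        return plan

    print(allocate("pdf-workflow"))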
Future "on-demand" computing systems, often depicted as potentially large scale and complex Service-Oriented Architectures, will need innovative management approaches for controlling and matching services demand and supply. Centralized optimization approaches reach their bounds with increasing network size and number of nodes. The search for decentralized approaches has led to build on self-organization concepts like Autonomic Computing, which draw their inspiration from Biology. This article shows how an alternative self-organization concept from Economics, the Catallaxy concept of F.A. von Hayek, can be realized for allocating service supply and demand in a distributed "on-demand" web services network. Its implementation using a network simulator allows evaluating the approach against a centralized resource broker, by dynamically varying connection reliability and node density in the network. Exhibiting Autonomic Computing properties, the Catallaxy realization outperforms a centralized broker in highly dynamic environments.2