The realization of end-to-end quality of service (QoS) guarantees in emerging network-based applications requires mechanisms that support dynamic discovery followed by advance or immediate reservation of resources that will often be heterogeneous in type and implementation, and independently controlled and administered. We propose the Globus Architecture for Reservation and Allocation (GARA) to address these issues. GARA treats both reservations and computational elements such as processes, network flows, and memory blocks as first-class entities, allowing them to be created, monitored, and managed independently and uniformly. It simplifies management of heterogeneous resource types by defining uniform mechanisms for computers, networks, disk, memory, and other resources. Layering on these standard mechanisms, GARA enables the construction of application-level co-reservation and co-allocation libraries that applications can use to dynamically assemble collections of resources, guided by both application QoS requirements and the local administration policy of individual resources. We describe a prototype GARA implementation that supports three different resource types: parallel computers, individual CPUs under control of the Dynamic Soft Real-Time scheduler, and Integrated Services networks. We also provide performance results that quantify the costs of our techniques.
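The co-reservation pattern the abstract describes can be illustrated with a minimal sketch: several heterogeneous resource managers expose a uniform reserve/release interface, and a co-reservation layer grants a set of requests all-or-nothing, rolling back partial grants on failure. All class and function names here are hypothetical illustrations, not the actual GARA API.

```python
from dataclasses import dataclass
import itertools

@dataclass
class Reservation:
    rid: int
    rtype: str   # "cpu", "network", "disk", ...
    params: dict

class ResourceManager:
    """Uniform manager interface shared by all resource types."""
    _ids = itertools.count(1)

    def __init__(self, rtype, capacity):
        self.rtype, self.capacity = rtype, capacity
        self.active = {}

    def reserve(self, amount, **params):
        if amount > self.capacity:
            raise RuntimeError(f"{self.rtype}: insufficient capacity")
        self.capacity -= amount
        r = Reservation(next(self._ids), self.rtype, {"amount": amount, **params})
        self.active[r.rid] = r
        return r

    def release(self, r):
        self.capacity += r.params["amount"]
        del self.active[r.rid]

def co_reserve(requests):
    """All-or-nothing co-reservation across heterogeneous managers."""
    granted = []
    try:
        for mgr, amount in requests:
            granted.append((mgr, mgr.reserve(amount)))
        return [r for _, r in granted]
    except RuntimeError:
        for mgr, r in granted:   # roll back partial grants
            mgr.release(r)
        raise

cpu = ResourceManager("cpu", capacity=8)       # CPUs
net = ResourceManager("network", capacity=100) # Mbit/s
rs = co_reserve([(cpu, 4), (net, 20)])
print([r.rtype for r in rs])   # -> ['cpu', 'network']
```

The key design point mirrored from the abstract is that reservations are first-class objects: they can be held, inspected, and released independently of the resources that back them.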
The Internet of Things' needs for computing power and storage are expected to remain on the rise in the next decade. Consequently, the amount of data generated by devices at the edge of the network will also grow. While cloud computing has been an established and effective way of acquiring computation and storage as a service for many applications, it may not be suitable to handle the myriad of data from IoT devices and fulfill largely heterogeneous application requirements. Fog computing has been developed to lie between IoT and the cloud, providing a hierarchy of computing power that can collect, aggregate, and process data from/to IoT devices. Combining fog and cloud may reduce data transfers and communication bottlenecks to the cloud and also contribute to reduced latencies, as fog computing resources exist closer to the edge. This paper examines this IoT-Fog-Cloud ecosystem and provides a literature review from different facets of it: how it can be organized, how management is being addressed, and how applications can benefit from it. Lastly, we present challenging issues yet to be addressed in IoT-Fog-Cloud infrastructures.

Centralized cloud data centers are often physically and/or logically distant from the cloud client, implying that communication and data transfers traverse multiple hops, which introduces delays and consumes network bandwidth of edge and core networks [2]. The widespread adoption of cloud computing, combined with the ever-increasing ability of edge devices to run heterogeneous applications that generate and consume all kinds of data from a variety of sources, requires novel distributed computing infrastructures that can cope with such heterogeneous application requirements. Computing infrastructures that enact applications at edge devices have started to appear in recent years [3,4], improving aspects such as response time and reducing bandwidth use.
Combining the ability to run smaller, localized applications at the edge with the high capacity of the cloud, fog computing has emerged as a paradigm that can support the heterogeneous requirements of small and large applications through multiple layers of a computational infrastructure that combines resources from the edge of the network as well as from the cloud [5]. In this paper, we aim at identifying and reviewing the main aspects and challenges that make the combination of fog computing and cloud computing suitable for the full range of applications leveraged by the Internet of Things. We discuss aspects from the infrastructure (processing, networking, protocols, and infrastructure for 5G support) to applications (smart cities, urban computing, and Industry 4.0), passing through the management complexity of the distributed IoT-fog-cloud system (services, resource allocation and optimization, energy consumption, data management and locality, device federation and trust, and business and service models). In the next section we introduce concepts and definitions for the Internet of Things (IoT), cloud computing…
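The layered edge/fog/cloud placement idea above can be sketched as a simple heuristic: each tier trades latency for capacity, and a task is placed on the closest tier that satisfies both its latency bound and its compute demand. The tier names, numbers, and rule below are assumptions for illustration only, not figures from the paper.

```python
# Illustrative IoT-fog-cloud hierarchy: closer tiers answer faster
# but have less compute; farther tiers have more compute but higher
# round-trip latency. All values are made up for the sketch.
TIERS = [
    # name,   round-trip latency (ms), available compute (arbitrary units)
    ("edge",   2,    1),
    ("fog",   10,   20),
    ("cloud", 80, 1000),
]

def place(task_latency_ms, task_compute):
    """Pick the closest tier that meets both constraints, or None."""
    for name, rtt, cap in TIERS:
        if rtt <= task_latency_ms and cap >= task_compute:
            return name
    return None

print(place(15, 5))     # latency-sensitive analytics -> fog
print(place(500, 300))  # heavy batch job -> cloud
print(place(5, 50))     # infeasible: fog-scale compute at edge latency -> None
```

This captures the survey's motivation in miniature: the fog tier absorbs work that the edge cannot compute and the cloud cannot answer quickly enough.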
Remote sensing data have become very widespread in recent years, and the exploitation of this technology has gone from developments mainly conducted by government intelligence agencies to those carried out by general users and companies. There is a great deal more to remote sensing data than meets the eye, and extracting that information turns out to be a major computational challenge. For this purpose, high-performance computing (HPC) infrastructures such as clusters, distributed networks, and specialized hardware devices provide important architectural developments that accelerate the computations related to information extraction in remote sensing. In this paper, we review recent advances in HPC applied to remote sensing problems; in particular, the HPC-based paradigms included in this review comprise multiprocessor systems, large-scale and heterogeneous networks of computers, grid and cloud computing environments, and hardware systems such as field-programmable gate arrays (FPGAs) and graphics processing units (GPUs). Combined, these parts deliver a snapshot of the state of the art and most recent developments in those areas, and offer a thoughtful perspective on the potential and emerging challenges of applying HPC paradigms to remote sensing problems.
The potential for faults in distributed computing systems is a significant complicating factor for application developers. While a variety of techniques exist for detecting and correcting faults, the implementation of these techniques in a particular context can be difficult. Hence, we propose a fault detection service designed to be incorporated, in a modular fashion, into distributed computing systems, tools, or applications. This service uses well-known techniques based on unreliable fault detectors to detect and report component failure, while allowing the user to trade off timeliness of reporting against false positive rates. We describe the architecture of this service, report on experimental results that quantify its cost and accuracy, and describe its use in two applications: monitoring the status of system components of the GUSTO computational grid testbed, and as part of the NetSolve network-enabled numerical solver.
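The timeliness-versus-false-positives trade-off in the abstract above can be sketched with a minimal heartbeat-based unreliable failure detector: components send periodic heartbeats, and a monitor suspects any component whose heartbeat is overdue by more than a configurable timeout. A shorter timeout reports real failures sooner but suspects slow-but-alive components more often. The class and names are illustrative, not the service's actual interface.

```python
class HeartbeatMonitor:
    """Unreliable failure detector driven by heartbeat timestamps."""

    def __init__(self, timeout):
        self.timeout = timeout    # seconds of silence before suspicion
        self.last_seen = {}

    def heartbeat(self, component, now):
        """Record that `component` was alive at time `now`."""
        self.last_seen[component] = now

    def suspects(self, now):
        """Components whose last heartbeat is older than the timeout."""
        return {c for c, t in self.last_seen.items()
                if now - t > self.timeout}

mon = HeartbeatMonitor(timeout=3.0)
mon.heartbeat("worker-1", now=0.0)
mon.heartbeat("worker-2", now=0.0)
mon.heartbeat("worker-1", now=2.0)   # worker-2 stays silent

print(mon.suspects(now=4.0))   # -> {'worker-2'}: overdue by 4s > 3s timeout
```

Lowering `timeout` makes the detector report suspected failures earlier, at the cost of wrongly suspecting components whose heartbeats are merely delayed; this is exactly the knob the abstract says the user can tune.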