It is increasingly common for computer users to have access to several computers on a network, and hence to be able to execute many of their tasks on any of several computers. The choice of which computers execute which tasks is commonly determined by users based on a knowledge of computer speeds for each task and the current load on each computer. A number of task scheduling systems have been developed that balance the load of the computers on the network, but such systems tend to minimize the idle time of the computers rather than minimize the idle time of the users. This paper focuses on the benefits that can be achieved when the scheduling system considers both the computer availabilities and the performance of each task on each computer. The SmartNet resource scheduling system is described and compared to two different resource allocation strategies: load balancing and user directed assignment. Results are presented where the operation of hundreds of different networks of computers running thousands of different mixes of tasks are simulated in a batch environment. These results indicate that, for the computer environments ...
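The abstract contrasts plain load balancing with scheduling that also accounts for how fast each machine runs each particular task. As a rough illustration only (not SmartNet's actual algorithm), the sketch below compares a load-balancing assignment, which picks the least-loaded machine, with a greedy minimum-completion-time assignment that also consults a hypothetical per-task runtime estimate `etc[t][m]`; all names and the toy data are assumptions made for this example.

```python
# Illustrative sketch only; not the SmartNet scheduler itself.
# 'etc' is a hypothetical matrix of estimated runtimes: etc[t][m] = time of task t on machine m.

def load_balancing(etc, n_machines):
    """Assign each task to the machine with the least accumulated load,
    ignoring how fast that machine runs this particular task."""
    ready = [0.0] * n_machines
    for t in range(len(etc)):
        m = min(range(n_machines), key=lambda m: ready[m])
        ready[m] += etc[t][m]
    return max(ready)  # makespan seen by the users

def min_completion_time(etc, n_machines):
    """Assign each task to the machine that finishes it earliest,
    using both the current load and the per-task speed."""
    ready = [0.0] * n_machines
    for t in range(len(etc)):
        m = min(range(n_machines), key=lambda m: ready[m] + etc[t][m])
        ready[m] += etc[t][m]
    return max(ready)

# Toy example: two tasks run fast on machine 0, one runs fast on machine 1.
etc = [[1.0, 10.0], [1.0, 10.0], [10.0, 1.0]]
print(load_balancing(etc, 2))       # 11.0: balanced load, but slow machines chosen
print(min_completion_time(etc, 2))  # 2.0: performance-aware assignment
```

In this toy case the load balancer keeps both machines busy yet makes the users wait far longer, which is exactly the distinction the abstract draws between minimizing machine idle time and minimizing user idle time.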
Even with the considerable advances in the development of middleware solutions, there is still a substantial gap in Internet of Things (IoT) and high-performance computing (HPC) integration. Existing middleware solutions cannot expose services such as processing, storage, sensing, security, context awareness, and actuating in a unified manner. The consequence is the use of several solutions, each with its own particularities, requiring different skills; moreover, users must solve the integration and heterogeneity issues themselves. To reduce the gap between IoT and HPC technologies, we present JavaCá&Lá (JCL), a middleware that helps the implementation of distributed user applications classified as IoT-HPC. This ubiquity is possible because JCL incorporates (1) a single application programming interface (API) to program different device categories; (2) support for different programming models; (3) interoperability of sensing, processing, storage, and actuating services; (4) integration with MQTT technology; and (5) security, context awareness, and action services exposed through the JCL API. Experimental evaluations demonstrated that JCL scales when providing IoT-HPC services. Additionally, we identify that customized JCL deployments become an alternative when Java-to-Android (and vice versa) code conversion is necessary. MQTT brokers are usually faster than JCL's HashMap-based sensing storage, but they do not operate in a distributed fashion, so they cannot handle a huge amount of sensing data. Finally, a short example for monitoring moving objects illustrates JCL's facilities for IoT-HPC development.

KEYWORDS: high-performance computing, Internet of Things, middleware

Big data,[1] Internet of Things (IoT),[2] and elastic cloud services[3] are technologies that support this new decentralized, dynamic, and communication-intensive society. According to El Baz et al,[4] the demand for integration of IoT and HPC exists and will increase soon, motivated by applications such as smart buildings, smart cities, and smart logistics. McKee et al[5] used the concept of the Internet of Simulation as an IoT extension, and the authors argued that HPC, cloud, edge,[6] and fog[7] computing are not enough for smart-city demands. In summary, these new applications require both alternatives (IoT and HPC) in a single middleware solution, but the integration imposes new challenges related to heterogeneity, because both technologies must communicate and operate together. Other challenges related to fundamental IoT and HPC requirements, such as deployment, code refactoring, performance, scheduling, and fault tolerance, are also important and hard to solve when integration is mandatory. We understand that fog and edge computing are decentralizing cloud services even further to reduce data-transfer latency, but without evidence of IoT service support, no integration is achieved while using these ...
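JCL's actual API is not reproduced here. As a purely hypothetical sketch of item (1), a single API for different device categories, the code below defines one minimal service interface that a sensing node, a processing node, or a storage node could implement, so client code stays the same regardless of device class; every class, method, and value in it is an assumption made for illustration, not part of JCL.

```python
# Hypothetical sketch of a unified IoT-HPC service interface; NOT the actual JCL API.
from abc import ABC, abstractmethod

class Device(ABC):
    """One interface for every device category (sensing, processing, storage, ...)."""

    @abstractmethod
    def execute(self, service: str, *args):
        ...

class SensorNode(Device):
    def execute(self, service, *args):
        if service == "sense":
            return {"temperature": 21.5}          # pretend sensor reading
        raise NotImplementedError(service)

class ClusterNode(Device):
    def execute(self, service, *args):
        if service == "process":
            task, data = args
            return task(data)                     # run a user task on the HPC side
        raise NotImplementedError(service)

class StorageNode(Device):
    def __init__(self):
        self._store = {}                          # stands in for a distributed map
    def execute(self, service, *args):
        if service == "store":
            key, value = args
            self._store[key] = value
        elif service == "retrieve":
            return self._store.get(args[0])
        else:
            raise NotImplementedError(service)

# Client code is identical for every device category.
sensor, cluster, storage = SensorNode(), ClusterNode(), StorageNode()
reading = sensor.execute("sense")
result = cluster.execute("process", lambda d: d["temperature"] * 1.8 + 32, reading)
storage.execute("store", "temp_f", result)
print(storage.execute("retrieve", "temp_f"))
```

The point of the sketch is the uniformity: the caller invokes `execute` the same way whether the target is a sensor, a cluster node, or a storage node, which is the kind of single-API ubiquity the abstract attributes to JCL.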
We present a new full cube computation technique and a cube storage representation approach, called the multidimensional cyclic graph (MCG) approach. The data cube relational operator has exponential complexity and therefore its materialization involves both a huge amount of memory and a substantial amount of time. Reducing the size of data cubes, without a loss of generality, thus becomes a fundamental problem. Previous approaches, such as Dwarf, Star and MDAG, have substantially reduced the cube size using graph representations. In general, they eliminate prefix redundancy and some suffix redundancy from a data cube. The MCG differs significantly from previous approaches as it completely eliminates prefix and suffix redundancies from a data cube. A data cube can be viewed as a set of sub-graphs. In general, redundant sub-graphs are quite common in a data cube, but eliminating them is a hard problem. Dwarf, Star and MDAG approaches only eliminate some specific common sub-graphs. The MCG approach efficiently eliminates all common sub-graphs from the entire cube, based on an exact sub-graph matching solution. We propose a matching function to guarantee one-to-one mapping between sub-graphs. The function is computed incrementally, in a top-down fashion, and its computation uses a minimal amount of information to generate unique results. In addition, it is computed for any measure type: distributive, algebraic or holistic. MCG performance analysis demonstrates that MCG is 20-40% faster than the Dwarf, Star and MDAG approaches when computing sparse data cubes. Dense data cubes have a small number of aggregations, so there is not enough room for runtime and memory consumption optimization; therefore, the MCG approach is not useful in computing such dense cubes. The compact representation of sparse data cubes enables the MCG approach to reduce memory consumption by 70-90% when compared to the original Star approach, proposed in [33]. In the same scenarios, the improved Star approach, proposed in [34], reduces memory consumption by only 10-30%, Dwarf by 30-50% and MDAG by 40-60%, when compared to the original Star approach. The MCG is the first approach that uses an exact sub-graph matching function to reduce cube size, avoiding unnecessary aggregation, i.e. improving cube computation runtime.
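The abstract's central idea is an exact one-to-one mapping between sub-graphs so that every redundant sub-graph is stored only once. The following sketch is not MCG's actual matching function; it only illustrates the general hash-consing idea under assumed names (`Node`, `intern`, and the toy cube values), where a canonical key built from a node's children and measure lets structurally identical sub-graphs collapse into a single shared instance.

```python
# Simplified sketch of sub-graph sharing via a canonical key (hash-consing);
# this is NOT MCG's matching function, only an illustration of the idea.

class Node:
    __slots__ = ("children", "measure")
    def __init__(self, children, measure):
        self.children = children    # dict: attribute value -> shared child Node
        self.measure = measure      # aggregated measure for this cell

_registry = {}                      # canonical key -> unique Node instance

def intern(children, measure):
    """Return a single shared Node for every structurally identical sub-graph.
    Because children are interned first (bottom-up here, for simplicity),
    their identities can serve as part of the canonical key."""
    key = (tuple(sorted((value, id(child)) for value, child in children.items())),
           measure)
    node = _registry.get(key)
    if node is None:
        node = Node(children, measure)
        _registry[key] = node
    return node

# Two cube branches that aggregate to the same sub-graph collapse into one node.
leaf = intern({}, 10)
a = intern({"jan": leaf}, 10)
b = intern({"jan": leaf}, 10)
print(a is b)   # True: the redundant sub-graph is stored only once
```

A real implementation would also have to handle the incremental, top-down computation and the different measure types the abstract mentions; the sketch only shows why an exact matching key removes common sub-graphs that prefix- or suffix-only schemes miss.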