The Internet of Things (IoT) has posed new requirements on the underlying processing architecture, especially for real-time applications such as event-detection services. Complex Event Processing (CEP) engines provide a powerful tool to implement these services. Fog computing has emerged as a solution to support IoT real-time applications, in contrast to the Cloud-based approach. This work analyses a CEP-based Fog architecture for real-time IoT applications that uses a publish-subscribe protocol. A testbed has been developed with low-cost, local resources to verify the suitability of CEP engines for constrained computing resources. To assess performance, we have analysed the effectiveness and cost of the proposal in terms of latency and resource usage, respectively. Results show that the fog computing architecture reduces event-detection latencies by up to 35% while using the available computing resources more efficiently, when compared to a Cloud deployment. The performance evaluation also identifies the communication between the CEP engine and the final users as the most time-consuming component of latency. Moreover, the latency analysis concludes that the time required by the CEP engine is related to the compute resources, but depends nonlinearly on the number of connected things.
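The event-detection services mentioned above can be illustrated with a minimal CEP-style pattern query. The sketch below is not taken from the paper's testbed; the `Event` class, the `detect_overheat` rule, and all thresholds are illustrative assumptions showing how a CEP engine matches a pattern (several readings above a threshold within a sliding time window) over a stream of sensor events.

```python
import time
from collections import deque

# Illustrative sketch of a CEP-style rule: detect `count` temperature
# readings above `threshold` arriving within a `window`-second span.
# Names and thresholds are hypothetical, not from the paper's testbed.

class Event:
    def __init__(self, sensor_id, value, ts=None):
        self.sensor_id = sensor_id
        self.value = value
        self.ts = ts if ts is not None else time.time()

def detect_overheat(events, threshold=30.0, count=3, window=10.0):
    """Return True if `count` readings above `threshold` occur
    within `window` seconds, mimicking a CEP pattern query."""
    recent = deque()
    for ev in events:
        if ev.value > threshold:
            recent.append(ev.ts)
            # Drop matched readings that have fallen out of the window.
            while recent and ev.ts - recent[0] > window:
                recent.popleft()
            if len(recent) >= count:
                return True
    return False

stream = [Event("s1", v, ts) for ts, v in
          enumerate([25.0, 31.2, 32.5, 33.1, 29.0])]
print(detect_overheat(stream))  # → True (three readings above 30.0)
```

A real CEP engine would express the same pattern declaratively and evaluate it continuously as events arrive, which is what makes the latency of the engine-to-user communication path the dominant cost the paper measures.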
In recent decades, computing has been one of the fastest-developing fields, enabling other disciplines to simulate, process, and summarize data in research, business, and entertainment. Over time, processing demands have kept growing: users upload approximately 400 hours of video to YouTube every minute, around 300 million photos are posted on Facebook daily, and each event generated at CERN produces about 1 MB of data, with roughly 600 million events per second. It has therefore been necessary to increase computational power and, as it became increasingly difficult to provide this capacity in a single machine, work moved towards computer clusters and Grid computing, and technologies such as Big Data emerged for data storage and processing. In this paper we present the implementation of a hybrid computing cluster that combines CPUs and GPUs, detail its construction, and analyse the results of the performance tests executed on the cluster.
The information and communication technology backbone of a smart city is an Internet of Things (IoT) application that combines technologies such as low-power IoT networks, device management, analytics, and event stream processing. Hence, designing an efficient IoT architecture for real-time IoT applications brings technical challenges that include the integration of application network protocols and data processing. In this context, the scalability of two architectures has been analysed: the first, named POST architecture, integrates the Hypertext Transfer Protocol with an Extract-Transform-Load technique and is used as the baseline; the second, named MQTT-CEP, is based on a publish-subscribe protocol, Message Queuing Telemetry Transport (MQTT), and a Complex Event Processing (CEP) engine. In this analysis, SAVIA, a smart-city citizen-security application, has been deployed following both architectural approaches. Results show that the design of the network protocol and the data analytics layer strongly affects the Quality of Service experienced by the final IoT users. The experiments show that the integrated MQTT-CEP architecture scales properly and keeps energy consumption limited, thereby promoting the development of a distributed IoT architecture based on constrained resources. The drawback is an increase in latency, caused mainly by the loosely coupled communication pattern of MQTT, but it remains within reasonable levels that stabilize with increasing workloads.
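The loose coupling that MQTT introduces in the MQTT-CEP architecture can be sketched with a minimal in-process publish-subscribe broker. This is only an illustration of the communication pattern: a real deployment would use an actual MQTT broker (such as Mosquitto) and a CEP engine, and the `Broker` class and topic names below are assumptions, not the paper's implementation.

```python
from collections import defaultdict

# Minimal in-process sketch of MQTT-style topic-based publish-subscribe.
# Publishers and subscribers never reference each other directly; the
# broker routes messages by topic, which is the source of both the
# scalability and the extra latency discussed in the abstract.

class Broker:
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)

    def publish(self, topic, payload):
        # Deliver to every subscriber of the topic, if any.
        for cb in self.subs[topic]:
            cb(payload)

broker = Broker()
received = []
# A CEP engine would subscribe to sensor topics and run pattern
# queries over the incoming event stream; the topic name is made up.
broker.subscribe("savia/sensors/alarm", received.append)
broker.publish("savia/sensors/alarm", {"sensor": "cam-3", "level": "high"})
print(received)  # → [{'sensor': 'cam-3', 'level': 'high'}]
```

Because senders only know topics, new sensors or consumers can join without changing existing components, which is the property that lets the MQTT-CEP design scale on constrained resources.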
The following work applies machine learning algorithms as a possible solution to the problem of citizen security in a South American city. The application aims to reduce the threat to the physical integrity of pedestrians by geolocating, in real time, safer places to walk. The freely available Open Data San Isidro database, covering a district of Lima, Peru, has been used in the development of this work. This database records different types of accidents (mostly automobile-related) occurring at different places in the district; these data are used to determine safe areas along the route from one place to another, decreasing the probability of suffering an accident. For this work, an unsupervised clustering algorithm, k-Means, has been used. Likewise, dimensionality reduction was previously performed using the Principal Component Analysis (PCA) technique.
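The PCA-then-k-Means pipeline described above can be sketched with NumPy alone. The data below are synthetic "accident hotspots", not the Open Data San Isidro records, and the feature dimensions and cluster count are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of the PCA -> k-Means pipeline: project the data
# onto its top principal components, then cluster in the reduced space.
rng = np.random.default_rng(0)
# Two synthetic "accident hotspots" in a 4-dimensional feature space.
X = np.vstack([rng.normal(0.0, 0.5, (50, 4)),
               rng.normal(5.0, 0.5, (50, 4))])

# PCA: centre the data and project onto the top-2 principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T          # reduced 2-D representation

# k-Means with k=2 (Lloyd's algorithm).
k = 2
centroids = Z[rng.choice(len(Z), k, replace=False)]
for _ in range(20):
    labels = np.argmin(np.linalg.norm(Z[:, None] - centroids, axis=2),
                       axis=1)
    # Recompute each centroid; keep the old one if its cluster is empty.
    centroids = np.array([Z[labels == j].mean(axis=0)
                          if np.any(labels == j) else centroids[j]
                          for j in range(k)])

print(labels)  # one cluster label (0 or 1) per point
```

In the application, the cluster structure found in the reduced space is what distinguishes safer areas from accident-prone ones along a route.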