To safely protect workplaces and the workforce during and after the COVID-19 pandemic, a scalable integrated sensing solution is required to offer real-time situational awareness and early warnings for decision-makers. However, an information-based solution for industry reopening is ineffective when the necessary operational information is locked up in disparate real-time data silos. Many ongoing efforts combat the COVID-19 pandemic using different combinations of low-cost location-based contact tracing and sensing technologies. These ad hoc Internet of Things (IoT) solutions for COVID-19 were developed using different data models and protocols, without an interoperable way to interconnect these heterogeneous systems and exchange data on people and place interactions. This research aims to design and develop an interoperable Internet of COVID-19 Things (IoCT) architecture that can exchange, aggregate, and reuse disparate IoT sensor data sources so that decision-makers can understand real-time workplace risks based on person-to-place interactions and make informed decisions. The IoCT architecture is based on the Sensor Web paradigm, which connects Things, Sensors, and Datastreams with an indoor geospatial data model. This paper presents what is, to the best of our knowledge, the first real-world integrated implementation of the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) and IndoorGML standards to estimate COVID-19 risk online, demonstrated through a workplace reopening case study. The proposed IoCT offers a new open-standard-based information model, architecture, methodologies, and software tools that enable the interoperability of disparate COVID-19 monitoring systems with finer spatial-temporal granularity. A workplace cleaning use case demonstrates the capabilities of the proposed IoCT architecture. The implemented architecture includes proximity-based contact tracing, people-density sensors, a COVID-19 risky-behavior monitoring system, and contextual building geospatial data.
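To illustrate how the Things/Sensors/Datastreams entities named above are typically exposed, the sketch below queries a hypothetical OGC SensorThings API endpoint for the most recent observations of a people-density Datastream. The endpoint URL, Datastream ID, and helper name are assumptions for illustration, not details of the paper's deployment.

```python
# Minimal sketch: reading recent observations from a hypothetical
# OGC SensorThings API service (Things -> Datastreams -> Observations).
# The endpoint URL and entity IDs are illustrative assumptions, not the
# actual service described in the paper.
import requests

BASE = "https://example.org/SensorThings/v1.1"  # hypothetical endpoint

def latest_observations(datastream_id: int, top: int = 5):
    """Fetch the most recent observations of one Datastream."""
    url = f"{BASE}/Datastreams({datastream_id})/Observations"
    params = {"$orderby": "phenomenonTime desc", "$top": top}
    resp = requests.get(url, params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()["value"]

if __name__ == "__main__":
    # e.g. a people-density Datastream feeding a workplace risk model
    for obs in latest_observations(datastream_id=42):
        print(obs["phenomenonTime"], obs["result"])
```

Because SensorThings standardizes this query interface, a risk model can consume contact-tracing, density, and behavior-monitoring feeds through the same pattern, which is the interoperability claim at the heart of the IoCT design.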
Many moving object databases predict the future locations of vehicles in arterial networks. While most studies base predictions on the frequent behavior of historical trajectories or on vehicles' recent kinematics, the dynamics of intersections are mostly neglected. Signalized intersections impose delays on vehicles that vary from zero to several minutes depending on the traffic state at the intersection. In the absence of traffic signal information (red and green times of signal phases, queue lengths, approaching traffic volume, turning volumes onto each intersection leg, etc.), the delays experienced at traffic signals are random variables. In this paper, we model the probability density function (PDF) and cumulative distribution function (CDF) of the delay at any point in an arterial network based on a spatiotemporal model of the queue at the intersection. The probability that a vehicle is present in a zone is then determined from the modeled delay distribution. A comparison between the results of the proposed method and a well-known kinematics-based method indicates a significant improvement in prediction precision.
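To make the link between the delay distribution and the presence probability concrete, consider a hedged sketch (the notation here is assumed for illustration, not taken from the paper). Let a vehicle leave an upstream reference point at time $t_0$, let $\tau(x)$ be its free-flow travel time to a location $x$ along the arterial, and let $D$ be the random signal delay with CDF $F_D$. The vehicle has passed $x$ by query time $t$ exactly when $D \le t - t_0 - \tau(x)$, so

```latex
\Pr\{\text{passed } x \text{ by time } t\} = F_D\bigl(t - t_0 - \tau(x)\bigr),
```

and the probability of presence in a zone $[x_1, x_2]$ at time $t$ follows as the difference of two such terms:

```latex
\Pr\{x_1 \le X_t \le x_2\}
  = F_D\bigl(t - t_0 - \tau(x_1)\bigr) - F_D\bigl(t - t_0 - \tau(x_2)\bigr).
```

This is why modeling $F_D$ from the queue dynamics, rather than assuming constant kinematics, directly changes the predicted zone probabilities.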
Emerging deep learning (DL) approaches combined with edge computing have enabled the automated extraction of rich information, such as complex events, from camera feeds. Because object detection is limited in speed and accuracy, some objects go undetected. As objects constitute simple events, missing objects result in missing simple events and thus reduce the number of detected complex events. The main objective of this paper is an integrated cloud and edge computing architecture, designed and developed to reduce missing simple events. To achieve this goal, we deployed multiple smart cameras (i.e., cameras that connect to the Internet and are integrated with computerised systems such as a DL unit) to detect complex events from multiple views. Obtaining more simple events from multiple cameras reduces missing simple events and increases the number of detected complex events. To evaluate the accuracy of complex event detection, we used the F-score of detecting COVID-19-spread risk behaviour events in video streams. The experimental results demonstrate that this architecture delivered 1.73 times higher event detection accuracy than an edge-based architecture using one camera. The average event detection latency of the integrated cloud and edge architecture was 1.85 times higher than that of a single camera; however, this increase was not significant for the current case study. Moreover, the architecture's accuracy for complex event matching involving more spatial and temporal relationships improved significantly compared with the edge-only scenario. Finally, complex event detection accuracy depended considerably on object detection accuracy: regression-based models such as you only look once (YOLO) provided better accuracy than region-based models.
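A minimal sketch of the multi-view idea follows; the class, field, and function names (and the deduplication rule) are assumptions for illustration, not the paper's API. Simple events detected independently by each camera are unioned, with near-duplicate detections of the same event merged, so that an event missed by one view can still be supplied by another before complex-event matching.

```python
# Minimal sketch of multi-view simple-event fusion (all names and the
# event schema are illustrative assumptions, not the paper's API).
from dataclasses import dataclass

@dataclass(frozen=True)
class SimpleEvent:
    label: str        # e.g. "person_no_mask"
    zone: str         # coarse spatial bin shared across camera views
    timestamp: float  # seconds

def fuse_views(per_camera_events, window=1.0):
    """Union simple events from all cameras, merging detections of the
    same (label, zone) that fall within `window` seconds of each other."""
    fused, seen = [], {}
    events = sorted((e for cam in per_camera_events for e in cam),
                    key=lambda e: e.timestamp)
    for e in events:
        key = (e.label, e.zone)
        last = seen.get(key)
        if last is None or e.timestamp - last > window:
            fused.append(e)       # new event, kept for complex-event matching
        seen[key] = e.timestamp   # otherwise a duplicate view, merged
    return fused
```

A downstream complex-event matcher would then apply its spatial and temporal predicates to the fused stream, which is where the reduction in missed simple events translates into more detected complex events.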