As technology scales, the increased vulnerability of modern systems due to unreliable components becomes a major problem in the era of multi-/many-core architectures. Recently, several on-line testing techniques have been proposed, aiming at detecting wear-out/aging-related defects that can appear during the lifetime of a system. In this work, we first investigate the relationship between system test latency and test-time overhead in multi-/many-core systems with a shared Last-Level Cache (LLC) for periodic Software-Based Self-Testing (SBST), under different test scheduling policies. Second, we propose a new methodology aiming to reduce the extra testing overhead that is incurred as the system scales up (i.e., as the number of on-chip cores increases). The investigated scheduling policies primarily vary the number of cores concurrently under test in the overall system test session. Our extensive, workload-driven dynamic exploration reveals an inverse relationship between the two test measures: as the number of cores concurrently under test increases, system test latency decreases, but at the cost of significantly increased test time, which sacrifices system availability for the actual workloads. Under given system test latency constraints, which dictate the recovery time in the event of an error, the proposed methodology reduces this test-time overhead while keeping the system test latency within the specified bounds.
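To make the reported trade-off concrete, the following is a minimal sketch (not the paper's methodology) of a toy model in which testing k cores concurrently slows down each core's test through shared-LLC contention. The core count N_CORES, the base test time T0, the slowdown factor ALPHA, and the linear contention model itself are all hypothetical assumptions for illustration.

```python
import math

N_CORES = 16    # hypothetical number of on-chip cores
T0 = 10.0       # hypothetical per-core test time (ms) with no contention
ALPHA = 0.25    # hypothetical LLC-contention slowdown per extra concurrent core

def per_core_test_time(k: int) -> float:
    """Per-core test time when k cores contend for the shared LLC."""
    return T0 * (1 + ALPHA * (k - 1))

def system_test_latency(k: int) -> float:
    """Latency of one full system test session: cores tested in groups of k."""
    groups = math.ceil(N_CORES / k)
    return groups * per_core_test_time(k)

def total_test_time(k: int) -> float:
    """Aggregate core-time spent testing, i.e., availability lost to workloads."""
    return N_CORES * per_core_test_time(k)

for k in (1, 2, 4, 8, 16):
    print(f"k={k:2d}  latency={system_test_latency(k):7.1f} ms"
          f"  total test time={total_test_time(k):7.1f} ms")
```

Under these toy parameters, moving from k=1 to k=16 cuts session latency from 160 ms to 47.5 ms while inflating total test time from 160 ms to 760 ms, mirroring the inverse relationship described above.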
Surveillance systems that capture video and audio in enterprise facilities and public places produce massive amounts of data while operating in a 24/7 mode. There is an increasing need to process such huge video and audio streams on the fly, to enable a quick summary of "interesting" events happening during a specified time frame in a particular location. Concepts like fog computing, based on the localisation of data processing, relax the dependence of existing cloud-based solutions on extensive bandwidth and processing at remote cloud resources; however, data-processing capabilities at the extreme edge are limited by the hardware of the devices. In this paper, we describe a novel, adaptive architecture that builds on top of a distributed computing paradigm and is ideal for smart surveillance systems that can utilize resources at the cloud, fog and edge. We present the main architectural components, the hardware options and the key software components of the system. Edge computing is realized by a camera-embedded system, cloud computing through publicly accessible infrastructure for data processing, and fog computing through the processing and fusion of video streams in small areas.
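The following is a minimal sketch, under assumed class names and thresholds (none of which come from the paper), of how frames might flow through the three tiers described above: cheap filtering on the camera-embedded edge device, fusion of nearby streams at a fog node, and heavy summarization in the cloud.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    camera_id: str
    motion_score: float   # computed locally on the edge device

class EdgeNode:
    """Camera-embedded system: cheap per-frame filtering."""
    MOTION_THRESHOLD = 0.3   # hypothetical tuning parameter

    def is_interesting(self, frame: Frame) -> bool:
        # Drop uninteresting frames locally to save uplink bandwidth.
        return frame.motion_score >= self.MOTION_THRESHOLD

class FogNode:
    """Fuses streams from the cameras covering a small area."""
    def fuse(self, frames: list[Frame]) -> dict:
        return {
            "cameras": sorted({f.camera_id for f in frames}),
            "peak_motion": max(f.motion_score for f in frames),
        }

class CloudBackend:
    """Publicly accessible infrastructure: heavy analytics and summaries."""
    def summarize(self, event: dict) -> str:
        return (f"Event across {len(event['cameras'])} cameras, "
                f"peak motion {event['peak_motion']:.2f}")

# Wiring the tiers together on a few sample frames:
edge, fog, cloud = EdgeNode(), FogNode(), CloudBackend()
frames = [Frame("cam-1", 0.7), Frame("cam-1", 0.1), Frame("cam-2", 0.5)]
interesting = [f for f in frames if edge.is_interesting(f)]
print(cloud.summarize(fog.fuse(interesting)))
```

The design choice this illustrates is that each tier reduces data volume before handing off: the edge discards frames, the fog collapses per-camera streams into area-level events, and only compact event records reach the cloud.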