This work presents a novel, comprehensive framework that leverages emerging augmented reality headset technology to enable smart nuclear industrial infrastructure with which humans can easily interact to improve safety, security, and productivity. Nuclear industrial operations involve some of the most complicated infrastructure that must be managed today. Nuclear infrastructure and the associated industrial operations typically feature stringent requirements for seismic performance, personnel management (e.g., access control, equipment access), safety (e.g., radiation, criticality, mechanical, electrical, spark, and chemical hazards), security (both cyber and physical), and sometimes international treaties for nuclear non-proliferation. Furthermore, a wide variety of manufacturing and maintenance operations take place within these facilities, further complicating their management. Nuclear facilities require thorough and stringent documentation of the operations occurring within them, as well as a tight chain of custody for the materials they store. The emergence of augmented reality and a variety of Internet of Things (IoT) devices offers a possible means of mitigating these challenges. This work demonstrates a prototype smart nuclear infrastructure system that leverages augmented reality to illustrate the advantages of such a system. It also presents example augmented reality tools that can be leveraged to create the next generation of smart nuclear infrastructure, and it lays out future directions of research for this class of work.
Video-based techniques for the identification of structural dynamics have the advantage of being very inexpensive to deploy compared to conventional accelerometer or strain-gauge techniques. When structural dynamics are identified from video using full-field, high-resolution analysis techniques that operate on the pixel time series, such as principal component analysis and solutions to blind source separation, the added benefit of high-resolution, full-field modal identification is achieved. An important property of video of vibrating structures is that it is particularly sparse: such video typically has a dimensionality of many thousands or even millions of pixels and hundreds to thousands of frames, yet the motion of the vibrating structure can be described using only a few mode shapes and their associated time series. As a result, emerging techniques for sparse and random sampling, such as compressive sensing, should be applicable to performing modal identification on video. This work presents how full-field, high-resolution structural dynamics identification frameworks can be coupled with compressive sampling. The techniques described in this work are demonstrated to recover mode shapes from experimental video of vibrating structures even when 70% to 90% of the frames captured in the conventional manner are removed.
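A minimal sketch of the idea underlying this abstract, using synthetic data rather than the authors' pipeline: because the pixel time series of a vibrating structure are low rank, the spatial mode shapes survive aggressive random removal of frames, here recovered with a principal-component (SVD) decomposition.

```python
import numpy as np

# Synthetic "video": each frame is a linear combination of two spatial
# mode shapes modulated by sinusoidal modal coordinates, mimicking the
# low-rank structure of real video of a vibrating structure.
rng = np.random.default_rng(0)
n_pixels, n_frames = 1000, 500
t = np.linspace(0.0, 5.0, n_frames)

# Two orthonormal spatial "mode shapes" and their modal time series.
shapes, _ = np.linalg.qr(rng.standard_normal((n_pixels, 2)))
coords = np.stack([np.sin(2 * np.pi * 3 * t), np.sin(2 * np.pi * 7 * t)])
video = shapes @ coords  # (n_pixels, n_frames), rank 2

# Randomly discard 80% of the frames (compressive temporal sampling).
keep = rng.choice(n_frames, size=n_frames // 5, replace=False)
subsampled = video[:, keep]

# Principal components of the surviving frames still span the true mode
# shapes, because only a few shapes describe all of the pixels.
U, _, _ = np.linalg.svd(subsampled, full_matrices=False)
recovered = U[:, :2]

# Cosines of the principal angles between true and recovered subspaces;
# values near 1 mean the mode shapes survived the 80% frame loss.
sv = np.linalg.svd(recovered.T @ shapes, compute_uv=False)
print(sv.min())  # ~1.0
```

The toy example only demonstrates the low-rank property the abstract relies on; the experimental work additionally contends with noise, lighting, and full compressive-sensing recovery guarantees.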
Event-driven neuromorphic imagers have a number of attractive properties, including low power consumption, high dynamic range, the ability to detect fast events, low memory consumption, and low bandwidth requirements. One of the biggest challenges in using event-driven imagery is that the field of event data processing is still embryonic, whereas decades of effort have been invested in the analysis of frame-based imagery. Hybrid approaches for applying established frame-based analysis techniques to event-driven imagery have been studied since event-driven imagers came into existence; however, the process of forming frames from event-driven imagery has not been studied in detail. This work presents a principled digital coded-exposure approach for forming frames from event-driven imagery that is inspired by the physics exploited in a conventional camera featuring a shutter. The technique provides a fundamental tool for understanding the temporal information content that contributes to the formation of a frame from event-driven imagery data. Event-driven imagery allows arbitrary virtual digital shutter functions to be applied on a pixel-by-pixel basis, enabling careful control of the spatio-temporal information captured in the frame. Unlike a conventional physical camera, event-driven imagery can be formed into any variety of possible frames in post-processing after the data is captured, and the coded-exposure virtual shutter functions can assume arbitrary values, including positive, negative, real, and complex values. The coded-exposure approach also enables applications of industrial interest, such as digital stroboscopy, without any additional hardware.
The ability to form frames from event-driven imagery in a principled manner opens up new possibilities for applying conventional frame-based image processing techniques to event-driven imagery.
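A minimal sketch of coded-exposure frame formation as described above, using a hypothetical `(x, y, t, polarity)` event format (not any specific sensor's API): each event is weighted by an arbitrary virtual shutter function of its timestamp before being accumulated into the frame.

```python
import numpy as np

def coded_exposure_frame(events, shape, shutter):
    """Accumulate polarity * shutter(t) into a frame, one event at a time.
    Complex dtype permits negative, real, or complex shutter values."""
    frame = np.zeros(shape, dtype=complex)
    for x, y, t, polarity in events:
        frame[y, x] += polarity * shutter(t)
    return frame

# Toy event stream: pixel (2, 3) fires at t=0.1 s (+) and t=0.6 s (-),
# pixel (5, 5) fires once at t=0.3 s (+).
events = [(2, 3, 0.1, +1), (2, 3, 0.6, -1), (5, 5, 0.3, +1)]

# A box shutter open over [0.0, 0.5) s mimics a physical shutter: the
# event at t = 0.6 s is simply excluded from the frame.
box = lambda t: 1.0 if 0.0 <= t < 0.5 else 0.0
frame = coded_exposure_frame(events, (8, 8), box)
print(frame[3, 2].real, frame[5, 5].real)  # 1.0 1.0

# A complex-exponential shutter selects a temporal frequency (2 Hz here),
# a software analogue of stroboscopy that no physical shutter can realize.
strobe = lambda t: np.exp(-2j * np.pi * 2.0 * t)
strobe_frame = coded_exposure_frame(events, (8, 8), strobe)
```

Because the shutter is applied in post-processing, the same recorded event stream can be re-rendered with any number of shutter functions, which is the flexibility the abstract contrasts against a physical camera.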