The Internet of Things (IoT) has penetrated deeply into our lives, and the number of IoT devices per person is expected to increase substantially over the next few years. Because IoT devices are constrained in power and battery capacity, their use in critical applications requires sophisticated security measures. Researchers from academia and industry now increasingly exploit the concept of blockchains to achieve security in IoT applications. The basic idea of a blockchain is that data generated by users or devices are verified for correctness and cannot be tampered with once recorded on the chain. Although a blockchain supports integrity and non-repudiation to some extent, it does not preserve the confidentiality of the data or the privacy of the devices: the content of the data can be seen by anyone in the network for verification and mining purposes. To address these privacy issues, we propose a new privacy-preserving blockchain architecture for IoT applications based on attribute-based encryption (ABE) techniques. Security, privacy, and numerical analyses are presented to validate the proposed model.
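The core ABE idea in the abstract (only parties whose attributes satisfy a policy can decrypt) can be illustrated with a toy sketch. This is a hedged stand-in, not real ABE: production schemes such as CP-ABE rely on pairing-based cryptography and expressive access structures, whereas here a key is simply derived from a sorted attribute set, and the XOR cipher is for illustration only.

```python
import hashlib
from typing import FrozenSet

def derive_key(attrs: FrozenSet[str]) -> bytes:
    # Toy stand-in for ABE key derivation: only a holder of the exact
    # attribute set can reproduce the key. Real ABE supports policies
    # like "doctor AND cardiology" over per-attribute key components.
    return hashlib.sha256("|".join(sorted(attrs)).encode()).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR keystream for illustration only; not a secure cipher.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Hypothetical access policy and payload (names are illustrative).
policy = frozenset({"role:doctor", "dept:cardiology"})
ciphertext = xor_cipher(b"patient vitals", derive_key(policy))

# A device holding the matching attributes recovers the plaintext ...
ok = xor_cipher(ciphertext, derive_key(frozenset({"dept:cardiology", "role:doctor"})))
# ... while a mismatched attribute set yields garbage.
bad = xor_cipher(ciphertext, derive_key(frozenset({"role:nurse"})))
print(ok == b"patient vitals", bad == b"patient vitals")  # -> True False
```

On a blockchain, only the ciphertext would be stored on-chain, so miners can still verify transactions without learning the payload.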
Abstract - 3D (3-Dimensional) video technologies are emerging to provide more immersive media content than conventional 2D (2-Dimensional) video applications. 3D video quality is most often measured through rigorous and time-consuming subjective evaluation campaigns, because it combines several perceptual attributes such as overall image quality, perceived depth, presence, naturalness, and eye strain. Hence, this paper investigates the relationship between subjective quality measures and several objective quality measures, namely PSNR, SSIM, and VQM, for 3D video content. Content captured both with a stereo camera pair (two cameras for the left and right views) and with colour-and-depth range cameras is considered in this study. The results show that VQM scores of the individual left and right views (rendered left and right views for the colour-and-depth sequences) can be used effectively to predict overall image quality, and that statistical measures such as PSNR and SSIM of the left and right views correlate well with the depth perception of 3D video.
Abstract - In the near future, many conventional video applications are likely to be replaced by immersive video to provide a sense of "being there". This transition is facilitated by the recent advancement of 3-D (3-Dimensional) capture, coding, transmission, and display technologies. Stereoscopic video is the simplest form of 3-D video available in the literature. "Colour plus depth map" based stereoscopic video has attracted significant attention, as it can reduce storage and bandwidth requirements for the transmission of stereoscopic content over communication channels. However, quality assessment of coded video sequences can currently only be performed reliably using expensive and inconvenient subjective tests. To enable researchers to optimize 3-D video systems in a timely fashion, it is essential that reliable objective measures are found. This paper investigates the correlation between subjective and objective evaluation of colour plus depth video. The investigation is conducted for different compression ratios and different video sequences. Transmission over IP (Internet Protocol) is also investigated. Subjective tests are performed to determine the image quality and depth perception of a range of differently coded video sequences, with packet loss rates ranging from 0% to 20%. The subjective results are used to determine more accurate objective quality assessment metrics for 3-D colour plus depth video.
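The objective metrics discussed above can be computed per view and then summarized over the stereo pair. A minimal sketch of PSNR, assuming 8-bit frames and averaging the left- and right-view scores (the frames and noise level here are synthetic placeholders, not data from the studies):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between a reference and a test frame."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical left/right reference frames and noisy "coded" versions.
rng = np.random.default_rng(0)
left_ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
right_ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
left_coded = np.clip(left_ref + rng.normal(0, 5, left_ref.shape), 0, 255)
right_coded = np.clip(right_ref + rng.normal(0, 5, right_ref.shape), 0, 255)

# Average the per-view scores as a simple stereo-pair summary.
stereo_psnr = 0.5 * (psnr(left_ref, left_coded) + psnr(right_ref, right_coded))
```

SSIM and VQM follow the same per-view pattern but model structural and perceptual distortions rather than raw pixel error, which is why the papers compare all three against subjective scores.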
Autonomous robots that assist humans in day-to-day living tasks are becoming increasingly popular. Autonomous mobile robots operate by sensing and perceiving their surrounding environment to make accurate driving decisions. A combination of several different sensors, such as LiDAR, radar, ultrasound sensors, and cameras, is utilized to sense the surrounding environment of autonomous vehicles. These heterogeneous sensors simultaneously capture various physical attributes of the environment. Such multimodality and redundancy of sensing need to be positively utilized for reliable and consistent perception of the environment through sensor data fusion. However, these multimodal sensor data streams differ from each other in many ways, such as temporal and spatial resolution, data format, and geometric alignment. For subsequent perception algorithms to utilize the diversity offered by multimodal sensing, the data streams need to be spatially, geometrically, and temporally aligned with each other. In this paper, we address the problem of fusing the outputs of a Light Detection and Ranging (LiDAR) scanner and a wide-angle monocular image sensor for free space detection. The outputs of the LiDAR scanner and the image sensor are of different spatial resolutions and need to be aligned with each other. A geometrical model is used to spatially align the two sensor outputs, followed by a Gaussian Process (GP) regression-based resolution matching algorithm to interpolate the missing data with quantifiable uncertainty. The results indicate that the proposed sensor data fusion framework significantly aids the subsequent perception steps, as illustrated by the performance improvement of an uncertainty-aware free space detection algorithm.
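The GP-regression resolution matching step can be sketched with the standard GP posterior equations: sparse LiDAR returns are treated as noisy training points and queried at the denser image-pixel grid, with the posterior variance quantifying interpolation uncertainty. This is a minimal 1-D numpy sketch under assumed kernel hyperparameters and synthetic ranges, not the paper's implementation:

```python
import numpy as np

def rbf(a, b, ls=2.0, var=1.0):
    """Squared-exponential kernel between 1-D input vectors a and b."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def gp_predict(x_train, y_train, x_query, noise=0.01):
    """GP posterior mean and variance at query points (standard equations)."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_query, x_train)
    Kss = rbf(x_query, x_query)
    L = np.linalg.cholesky(K)                     # stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - np.sum(v ** 2, axis=0)   # predictive variance
    return mean, var

# Hypothetical sparse LiDAR ranges along one scan line (pixel column -> metres).
x_lidar = np.array([0.0, 4.0, 8.0, 12.0, 16.0])
depth = np.array([5.0, 5.2, 6.1, 6.0, 5.8])

# Query at the (denser) image-pixel grid; var flags risky interpolated cells.
x_pix = np.linspace(0.0, 16.0, 33)
mean, var = gp_predict(x_lidar, depth, x_pix)
```

A downstream free-space detector can then weight or discard interpolated depths whose posterior variance is high, which is the "quantifiable uncertainty" the abstract refers to.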