Artificial Intelligence of Things (AIoT) brings artificial intelligence (AI) to edge Internet of Things (IoT) devices. In recent years, compressive sensing (CS), which relies on signal sparsity, has been widely embedded in IoT devices and is expected to improve their energy efficiency and battery lifetime. Unlike conventional image compression standards, CS can produce various reconstructed images by applying different reconstruction algorithms to the same coded data. Exploiting this property, we propose, for the first time, a deep learning-based compressive sensing image enhancement framework that uses multiple reconstructed signals (CSIE-M). First, images are reconstructed by different CS reconstruction algorithms. Second, the reconstructed images are scored and sorted by a no-reference quality assessment module before being fed to the quality enhancement module in order of their quality scores. Finally, a multiple-input recurrent dense residual network is designed to exploit and enrich the useful information across the reconstructed images. Experimental results show that CSIE-M obtains a 1.88-8.07 dB PSNR improvement, whereas state-of-the-art works achieve 1.69-6.69 dB, under sampling rates from 0.125 to 0.75. Moreover, using multiple reconstructed versions of the signal improves PSNR by 0.19-0.23 dB while increasing reconstruction time by only 4% compared to using a single reconstructed signal.
Index Terms: Compressive sensing, deep learning approach for compressed image enhancement, multiple-to-one mapping.
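The abstract describes a pipeline that scores multiple CS reconstructions with a no-reference quality module and feeds them to the enhancement network in quality order. A minimal sketch of that ordering step, assuming a hypothetical placeholder score (`variance_score` stands in for a real learned no-reference quality model, which the paper does not detail):

```python
import numpy as np

def variance_score(img):
    # Hypothetical placeholder for a no-reference quality score;
    # a real system would use a learned NR-IQA model instead.
    return float(np.var(img))

def order_reconstructions(recons):
    """Sort reconstructed images by descending score, then stack them
    along a new leading axis so a multiple-input network can consume
    all reconstructions at once."""
    scored = sorted(recons, key=variance_score, reverse=True)
    return np.stack(scored, axis=0)

# Three toy "reconstructions" of the same 8x8 image with different noise levels.
rng = np.random.default_rng(0)
base = rng.random((8, 8))
recons = [base + rng.normal(0.0, s, (8, 8)) for s in (0.3, 0.05, 0.15)]
batch = order_reconstructions(recons)  # shape (3, 8, 8), best-scored first
```

This only illustrates the sort-then-stack interface; the recurrent dense residual network itself is outside the scope of the abstract.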
Motion compensated prediction is one of the essential methods for reducing temporal redundancy in inter coding. Its goal is to predict the current frame from a list of reference frames. Recent video coding standards commonly use interpolation filters to obtain sub-pixel samples when the best-matching block lies at a fractional position in the reference frame. However, fixed filters cannot adapt to the variety of natural video content. Inspired by the success of convolutional neural networks (CNNs) in super-resolution, we propose CNN-based fractional interpolation for the luminance (Luma) and chrominance (Chroma) components in motion compensated prediction to improve coding efficiency. Moreover, two syntax elements indicating the interpolation methods for the Luma and Chroma components are added to the bin-string and encoded by CABAC in regular mode. As a result, our proposal achieves 2.9%, 0.3%, and 0.6% BD-rate reductions for the Y, U, and V components, respectively, under the low-delay P configuration.
Index Terms: Convolutional neural network (CNN), fractional interpolation, video coding, motion compensated prediction.
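For context on what the proposed CNN replaces: recent standards interpolate half-pixel luma samples with a fixed separable filter. A minimal NumPy sketch using HEVC's 8-tap half-pel luma coefficients (the fixed-filter baseline; the paper's CNN-based interpolation itself is not reproduced here):

```python
import numpy as np

# HEVC 8-tap luma half-pel interpolation filter coefficients
# (normalized by 64). This is the kind of fixed filter the
# proposal replaces with a learned CNN.
HALF_PEL = np.array([-1, 4, -11, 40, 40, -11, 4, -1], dtype=np.float64)

def interp_half_pel_row(row):
    """Interpolate the horizontal half-pixel positions along one row
    of 8-bit luma samples, replicating edge pixels at the borders."""
    padded = np.pad(row.astype(np.float64), (3, 4), mode='edge')
    # Cross-correlate: out[i] = sum_k c[k] * x[i - 3 + k]
    out = np.correlate(padded, HALF_PEL, mode='valid') / 64.0
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)

# On a flat region the filter is an identity (coefficients sum to 64),
# so a constant row interpolates to the same constant.
flat = interp_half_pel_row(np.full(16, 100, dtype=np.uint8))
```

Quarter-pel positions use different tap sets, and Chroma uses 4-tap filters; the sketch covers only the half-pel luma case to show why a single fixed kernel cannot adapt to varying content.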