Image analysis underpins many applications, including industrial automation with the Industrial Internet of Things and machine vision. Images captured by cameras, especially in outdoor environments, are degraded by parameters such as lens blur, lens dirtiness, and lens distortion (barrel distortion). Many approaches assess the impact of camera parameters on image quality; however, most do not use important quality assessment metrics such as Oriented FAST and Rotated BRIEF (ORB) and Structural Content (SC), and none objectively evaluates the impact of barrel distortion on image quality using metrics such as Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Content, Oriented FAST and Rotated BRIEF, and the Structural Similarity Index (SSIM). In this paper, besides lens dirtiness and blur, we also examine the impact of barrel distortion using datasets with different levels of barrel distortion. Our analysis shows that none of the existing metrics produces quality values consistent with intuitively defined impact levels for lens blur, dirtiness, and barrel distortion. To address these shortcomings and make quality assessment more reliable, we propose a new image quality assessment metric that fuses the quality values obtained from the individual metrics using Dempster-Shafer theory, a decision-fusion technique. The proposed metric produces quality values that are consistent with the perceptually defined camera parameter impact levels; for all of the above camera impacts it achieves 100% assessment reliability, a substantial improvement over the other metrics.
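The abstract names Dempster-Shafer theory as the fusion rule but does not spell out the construction, so the following is only a minimal sketch of how per-metric quality scores could be combined under Dempster's rule of combination. The two-element frame of discernment {good, bad}, the `mass_from_score` heuristic, and the normalisation constants are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def mse(a, b):
    """Mean Squared Error between two images of the same shape."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, data_range=255.0):
    """Peak Signal-to-Noise Ratio in dB."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(data_range ** 2 / m)

# Frame of discernment: image quality is GOOD or BAD; EITHER carries the
# mass a metric cannot commit to (its own unreliability).
GOOD, BAD, EITHER = frozenset({"good"}), frozenset({"bad"}), frozenset({"good", "bad"})

def mass_from_score(score, uncertainty=0.2):
    """Map a normalised quality score in [0, 1] to a basic mass assignment."""
    committed = 1.0 - uncertainty
    return {GOOD: committed * score, BAD: committed * (1.0 - score), EITHER: uncertainty}

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions on one frame."""
    combined, conflict = {}, 0.0
    for b1, v1 in m1.items():
        for b2, v2 in m2.items():
            inter = b1 & b2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2  # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Toy usage: fuse a PSNR-based and an SSIM-based judgement of one image.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(ref.astype(np.int16) + rng.integers(-20, 21, ref.shape), 0, 255).astype(np.uint8)

psnr_score = min(psnr(ref, noisy) / 50.0, 1.0)  # crude normalisation to [0, 1]
ssim_score = 0.8  # e.g. from skimage.metrics.structural_similarity
fused = dempster_combine(mass_from_score(psnr_score), mass_from_score(ssim_score))
print("belief that the image is good:", fused[GOOD])
```

Because every mass function reserves some weight for EITHER, the conflict term never reaches 1, so the normalisation in `dempster_combine` is always well defined; a fused decision can then be read off as the belief in GOOD.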
Internet of Things (IoT) images are attracting growing attention because of their wide range of applications, which require visual analysis to drive automation. However, IoT images are predominantly captured in outdoor environments and are therefore inherently affected by camera and environmental parameters that can adversely impact the corresponding applications. Deep Learning (DL) has been widely adopted in image processing and computer vision and can reduce the impact of these parameters on IoT images. Although many DL-based techniques for analyzing and reducing environmental and camera impacts on IoT images are available in the current literature, to the best of our knowledge no survey presents the state-of-the-art DL-based approaches for this purpose. Motivated by this, we present the first Systematic Literature Review (SLR) of existing DL techniques for analyzing and reducing environmental and camera lens impacts on IoT images. As part of this SLR, we first highlight the significance of IoT images in their respective applications. Second, we describe the DL techniques employed for assessing environmental and camera lens distortion impacts on IoT images. Third, we illustrate how DL can be effective in reducing these impacts. Finally, alongside a critical reflection on the advantages and limitations of these techniques, we discuss ways to address their open research challenges and identify further research directions to advance the field.
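As an illustration of the supervised restoration pattern that much of the surveyed DL work shares (train a network on pairs of degraded and clean images, minimise a pixel loss), here is a deliberately tiny PyTorch sketch; `RestorationCNN` and the random tensors are placeholders and do not correspond to any specific surveyed model.

```python
import torch
import torch.nn as nn

class RestorationCNN(nn.Module):
    """Tiny residual CNN: maps a degraded image to a restored one."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Predict a residual and add it back: the network only has to learn
        # the degradation, not reproduce the entire image content.
        return x + self.net(x)

# One supervised training step on (degraded, clean) pairs.
model = RestorationCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
degraded = torch.rand(4, 3, 64, 64)  # placeholder batch of distorted images
clean = torch.rand(4, 3, 64, 64)     # placeholder ground-truth images
loss = nn.functional.l1_loss(model(degraded), clean)
opt.zero_grad()
loss.backward()
opt.step()
print("L1 loss:", loss.item())
```

The residual formulation (`x + self.net(x)`) is a common design choice in restoration networks because the identity mapping is a strong prior when the degradation is mild.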