Underwater image processing is a broad area of research concerned with identifying blurred objects in ocean and sea environments. Light absorbed and scattered by the underwater medium creates artifacts in the acquired images due to low-light effects, poor visibility, darkness, illumination variation, and color contrast degradation. In this work, a novel optical image preprocessing framework was proposed to restore the color contrast of images obtained from various water-colored environments. A domain transform-based multiscale image decomposition method was used to extract the base, residual, and detail layers. The smoothed image was then reconstructed from the base layer by applying an adaptive iterative backward-projection algorithm. To obtain dehazed images, a multiscale fusion-based algorithm was developed, which fused the detail layers with the reconstructed image. The proposed framework considerably reduced the contrast degradation and low-light-related artifacts present in images acquired from various water-colored environments. The experimental results show considerable improvements over competing methods in terms of visual perception and estimated performance metrics (underwater image quality measure = 6.1186; peak signal-to-noise ratio = 26.68; feature point matches = 96; semantic image segmentation accuracy = 0.8465; processing time of 0.11 seconds per image). The novelty of this method lies in its fusion of well-established techniques, which outperformed advanced approaches such as deep learning methods in terms of simplicity, speed, performance, dataset requirements, image resolution, and range of applications.
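As a rough illustration of the decomposition-and-fusion style of pipeline described above, the sketch below splits an image into a base layer and per-scale detail layers with an edge-preserving domain transform filter and then recombines them with per-scale gains. The function names, parameter values, gains, file names, and the use of OpenCV's `cv2.ximgproc.dtFilter` (from opencv-contrib-python) are assumptions made for illustration; the adaptive iterative backward-projection step is omitted, and this is not the authors' implementation.

```python
# Minimal sketch of multiscale decomposition + fusion (illustrative only).
import cv2
import numpy as np

def decompose(img, sigma_spatial=(10.0, 30.0), sigma_color=0.1):
    """Split an image into a coarse base layer and per-scale detail layers
    using repeated edge-preserving (domain transform) smoothing."""
    current = img.astype(np.float32) / 255.0
    details = []
    for ss in sigma_spatial:
        # Self-guided domain transform filtering (requires opencv-contrib-python).
        smoothed = cv2.ximgproc.dtFilter(current, current, ss, sigma_color)
        details.append(current - smoothed)   # detail layer at this scale
        current = smoothed
    return current, details                  # base layer, detail layers

def fuse(base, details, gains=(1.5, 1.2)):
    """Recombine the base layer with boosted detail layers."""
    out = base.copy()
    for d, g in zip(details, gains):
        out += g * d
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)

# "underwater.png" is a placeholder input image.
img = cv2.imread("underwater.png")
base, details = decompose(img)
cv2.imwrite("restored.png", fuse(base, details))
```

In such a pipeline the base layer carries large-scale illumination and color, while the detail layers carry texture; boosting the detail layers before recombination is one simple way to counteract haze-induced contrast loss.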
Coral reefs are significant marine ecosystems that are affected by multiple diseases caused by heat stress and variations in ocean conditions. Autonomous monitoring and detection of coral health are crucial for researchers to protect reefs at an early stage. Detecting coral diseases is difficult due to the scarcity of coral-reef datasets. We have therefore developed a coral-reef benchmark dataset and propose a Multi-scale Attention Feature Fusion Network (MAFFN) as the neck of the YOLOv5 network, called "MAFFN_YOLOv5". The MAFFN_YOLOv5 model outperforms state-of-the-art object detectors such as YOLOv5, YOLOX, and YOLOR, improving detection accuracy by 8.64%, 3.78%, and 18.05%, respectively, in terms of mean average precision (mAP@.5), and by 7.8%, 3.72%, and 17.87%, respectively, in terms of mAP@.5:.95. Additionally, we have tested a hardware-based deep neural network for the detection of coral-reef health.
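For readers unfamiliar with attention-based feature fusion in a detector neck, the following PyTorch sketch shows one minimal way two pyramid levels can be fused under a learned channel-attention weighting. The module structure, layer choices, channel counts, and input shapes are hypothetical illustrations and are not taken from the published MAFFN_YOLOv5 architecture.

```python
# Illustrative multi-scale attention feature-fusion block (not the authors' MAFFN).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    """Fuses two feature maps from different pyramid levels with channel attention."""
    def __init__(self, c_low, c_high, c_out):
        super().__init__()
        self.proj_low = nn.Conv2d(c_low, c_out, kernel_size=1)
        self.proj_high = nn.Conv2d(c_high, c_out, kernel_size=1)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),          # global context per channel
            nn.Conv2d(c_out, c_out, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, low, high):
        # Upsample the coarser map to the finer map's spatial size, then add.
        high = F.interpolate(self.proj_high(high), size=low.shape[-2:], mode="nearest")
        fused = self.proj_low(low) + high
        return fused * self.attn(fused)       # re-weight channels before the head

# Hypothetical YOLO-style feature maps at strides 8 and 16.
p3 = torch.randn(1, 256, 80, 80)
p4 = torch.randn(1, 512, 40, 40)
print(AttentionFusion(256, 512, 256)(p3, p4).shape)  # torch.Size([1, 256, 80, 80])
```

The general idea a neck module like this illustrates is that coarse, semantically strong features and fine, spatially detailed features are combined, with the attention weights deciding which channels of the fused map are emphasized before detection.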