This paper presents an underwater passive source localization method formulated as an underdetermined linear inverse problem. Exploiting the spatial sparsity of the source signals, the signal strength on a specified grid is evaluated with sparse reconstruction algorithms. Because the system equation for multiple snapshots is built from the data correlation matrix, the strategy yields a high ratio of measurements to sparsity (RMS), sharper peaks with low side-lobe levels, and a reduced problem dimensionality. Furthermore, to lower the computational burden, pre-locating with the Bartlett processor is presented. The proposed technique performs close to the Bartlett and white-noise-gain-constraint processors in the single-source scenario, and slightly better when localizing multiple sources. It combines the respective strengths of the traditional Bartlett and white-noise-gain-constraint methods: robustness to environmental/system mismatch and high resolution. Both simulated and experimental data are processed to demonstrate the effectiveness of the method for underwater source localization.
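The Bartlett pre-location step mentioned above can be sketched as a conventional beamformer scan over a candidate grid. The array geometry, source bearing, and noise level below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def bartlett_spectrum(R, steering):
    """Bartlett power for each candidate grid point.
    R: (M, M) data correlation matrix; steering: (M, G) steering vectors."""
    # P(g) = a_g^H R a_g / (a_g^H a_g) for each grid point g
    num = np.einsum('mg,mn,ng->g', steering.conj(), R, steering).real
    den = np.einsum('mg,mg->g', steering.conj(), steering).real
    return num / den

# Illustrative example: 8-element half-wavelength line array, plane wave at 20 degrees
M, d, wavelength = 8, 0.5, 1.0
angles = np.linspace(-90, 90, 181)
k = 2 * np.pi / wavelength
A = np.exp(-1j * k * d * np.outer(np.arange(M), np.sin(np.radians(angles))))
a_true = A[:, np.searchsorted(angles, 20)]
R = np.outer(a_true, a_true.conj()) + 0.01 * np.eye(M)  # rank-1 signal + noise floor
P = bartlett_spectrum(R, A)
print(angles[np.argmax(P)])  # peak falls at the true bearing, 20 degrees
```

Grid points whose Bartlett power exceeds a threshold would then define the reduced grid passed to the sparse reconstruction stage.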
Weed infestation damages crops and limits agricultural production. Traditional weed-control methods rely on agrochemicals and labour-intensive practices. Various methods have been proposed for weed detection using multispectral images. Machine-vision-based weed detection requires the extraction of a large number of multispectral texture features, which increases the computational cost. Deep neural networks are used for pixel-based weed classification, but they require a large image dataset for network training, which is time-consuming and expensive to collect, particularly for multispectral images. These methods also require a Graphics Processing Unit (GPU)-based system because of their high computational cost. In this article, we propose a novel weed detection model that addresses these issues: it requires neither supervised training on labelled images nor multispectral texture-feature extraction. The proposed model can execute on a Central Processing Unit (CPU)-based system, which reduces its computational cost. The Predictive Coding/Biased Competition-Divisive Input Modulation (PC/BC-DIM) neural network is used to compute a multispectral fused saliency map, which is then used to predict salient crops and detect weeds. The proposed model achieved 94.38% mean accuracy, 0.086 mean square error, and 0.291 root mean square error.
Invertebrates are abundant in horticulture and farming environments and can be detrimental. Early pest detection for an integrated pest-management system, combining physical, biological, and prophylactic methods, has huge potential for improving crop yield. Computer vision techniques with multispectral images are used to detect and classify pests under dynamic environmental conditions such as sunlight variations, partial occlusions, and low contrast. Various state-of-the-art deep learning approaches have been proposed, but they have major limitations. For example, labelled images are required to supervise the training of deep networks, which is tiresome work. Secondly, a huge in-situ database covering variant environmental conditions is not available for deep learning, and is difficult to build for fretful bioaggressors. In this paper, we propose a machine-vision-based multispectral pest-detection algorithm that does not require any supervised network training. Multispectral images are the input to the proposed algorithm; each image provides comprehensive information about textural and morphological features as well as visible information, i.e., size, shape, orientation, color, and wing patterns, for each insect. Feature identification is performed by the SURF algorithm, and feature extraction is accomplished by least median of squares regression (LMEDS). Features from the RGB and NIR images are fused onto the Ultraviolet (UV) coordinates after affine transformation. The type I, type II, and total mean identification errors improve on those of state-of-the-art methods; with 6.672% UV weights, they were 1.62, 40.27, and 3.26, respectively.
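The LMEDS step above robustly fits the affine transform that maps matched feature coordinates from one band onto another while tolerating gross mismatches. A minimal self-contained sketch of least-median-of-squares affine estimation follows; the point correspondences, outlier fraction, and trial count are synthetic assumptions for illustration:

```python
import numpy as np

def lmeds_affine(src, dst, trials=200, rng=None):
    """Fit dst ≈ A @ [src; 1] robustly: among models fit to random minimal
    samples, keep the one with the smallest median squared residual."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(src)
    src_h = np.hstack([src, np.ones((n, 1))])     # homogeneous coordinates
    best_A, best_med = None, np.inf
    for _ in range(trials):
        idx = rng.choice(n, 3, replace=False)     # minimal sample: 3 point pairs
        A, *_ = np.linalg.lstsq(src_h[idx], dst[idx], rcond=None)
        res = np.sum((src_h @ A - dst) ** 2, axis=1)
        med = np.median(res)                      # median, not mean: outlier-proof
        if med < best_med:
            best_med, best_A = med, A.T
    return best_A

# Synthetic demo: known affine map plus 30% grossly corrupted correspondences
rng = np.random.default_rng(1)
A_true = np.array([[1.0, 0.1, 5.0], [-0.1, 1.0, -3.0]])
src = rng.uniform(0, 100, (50, 2))
dst = np.hstack([src, np.ones((50, 1))]) @ A_true.T
dst[:15] += rng.uniform(-40, 40, (15, 2))         # corrupt 15 of 50 points
A_est = lmeds_affine(src, dst, rng=rng)
print(np.round(A_est, 2))  # recovers A_true despite the outliers
```

Because the median of the squared residuals ignores up to half the data, the estimate is unaffected by the corrupted correspondences, which is why LMEDS suits cross-band feature fusion where many matches are wrong.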