Accurately detecting the early developmental stages of insect pests (larvae) from off-the-shelf stereo camera data using deep learning offers several benefits to farmers, from simpler robot configuration to early neutralization of this less agile but more destructive stage. Machine vision technology has advanced pest control from bulk spraying to precise dosing and even direct application to infected crops; however, these solutions focus primarily on adult pests and post-infestation stages. This study proposes mounting a front-facing red-green-blue (RGB) stereo camera on a robot to identify pest larvae with deep learning. The camera feeds data into our deep-learning pipeline, evaluated across eight ImageNet pre-trained models. On our custom pest-larvae dataset, the combination of an insect classifier and a detector replicates peripheral and foveal line-of-sight vision, respectively, enabling a trade-off between smooth robot operation and localization precision: the farsighted (peripheral) stage flags a pest as soon as it appears in view, and the nearsighted (foveal) stage then applies our Faster Region-based Convolutional Neural Network (Faster R-CNN) pest detector to localize it precisely. Simulating the robot dynamics in CoppeliaSim and MATLAB/Simulink with the Deep Learning Toolbox demonstrated the feasibility of the proposed system. Our deep-learning classifier and detector achieved 99% accuracy and a mean average precision of 0.84, respectively.
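To make the two-stage peripheral/foveal design concrete, the sketch below chains a whole-frame pest classifier with a Faster R-CNN detector, running the expensive detector only on frames the classifier flags. This is a minimal illustration under stated assumptions, not the paper's exact configuration: the backbone choice (ResNet-18), the COCO-pretrained detector weights, and the 0.5 threshold are all placeholders.

```python
# Hypothetical sketch of the "peripheral then foveal" pipeline: a lightweight
# ImageNet-pretrained classifier screens whole frames, and a Faster R-CNN
# detector localizes larvae only on frames flagged as positive.
import torch
import torchvision
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Peripheral stage: binary pest / no-pest classifier.
# The head would be fine-tuned on the pest dataset offline (training not shown).
classifier = torchvision.models.resnet18(weights="IMAGENET1K_V1")
classifier.fc = torch.nn.Linear(classifier.fc.in_features, 2)
classifier.eval()

# Foveal stage: Faster R-CNN detector for precise larva localization.
detector = fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

@torch.no_grad()
def process_frame(frame, pest_threshold=0.5):
    """frame: float tensor (3, H, W) in [0, 1], resized/normalized upstream."""
    # Cheap peripheral check on every frame while the robot drives.
    logits = classifier(frame.unsqueeze(0))
    pest_prob = torch.softmax(logits, dim=1)[0, 1].item()
    if pest_prob < pest_threshold:
        return None  # no pest in view; keep moving

    # Expensive foveal pass only when the classifier fires.
    detections = detector([frame])[0]
    keep = detections["scores"] > pest_threshold
    return detections["boxes"][keep]  # (N, 4) boxes for precise targeting
```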
During the COVID-19 pandemic, the need for rapid and reliable alternative screening methods has motivated the development of learning networks that screen COVID-19 patients based on chest radiography obtained from chest X-ray (CXR) and computed tomography (CT) imaging. Although the effectiveness of the developed models has been documented, their adoption by radiologists has been limited, mainly because no applicable deployment framework has been implemented or presented. Therefore, in this paper, a robotic framework is proposed to aid radiologists in COVID-19 patient screening. Specifically, transfer learning is first employed to train two well-known learning networks (GoogleNet and SqueezeNet) to classify COVID-19-positive and -negative patients from CXR and CT images collected from three publicly available repositories. SqueezeNet obtained a test accuracy of 90.90%, with a sensitivity of 94.70% and a specificity of 87.20%, while GoogleNet obtained a test accuracy of 96.40%, with a sensitivity of 95.50% and a specificity of 97.40%. Consequently, to demonstrate the clinical usability of the model, it is deployed on the SoftBank NAO-V6 humanoid robot, a social robot serving as an assistive platform for radiologists. The strategy provides end-to-end, explainable sorting of X-ray images of COVID-19 patients. A laboratory-based implementation of the overall framework demonstrates the effectiveness of the proposed platform in aiding radiologists in COVID-19 screening.
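The transfer-learning step amounts to loading the ImageNet-pretrained networks and replacing only their classification heads for the binary COVID task. The PyTorch sketch below is a hedged illustration: the authors may have used a different framework (GoogleNet and SqueezeNet also ship with the MATLAB Deep Learning Toolbox), and the preprocessing and optimizer settings here are assumptions.

```python
# Hypothetical sketch of transfer learning for binary COVID-19 screening:
# reuse ImageNet-pretrained GoogleNet and SqueezeNet, swapping only the
# final layer for two classes (COVID-positive vs. COVID-negative).
import torch
import torch.nn as nn
import torchvision

NUM_CLASSES = 2  # COVID-positive vs. COVID-negative

# GoogleNet: replace the final fully connected layer.
googlenet = torchvision.models.googlenet(weights="IMAGENET1K_V1")
googlenet.fc = nn.Linear(googlenet.fc.in_features, NUM_CLASSES)

# SqueezeNet: the classifier head is a 1x1 convolution, so replace that.
squeezenet = torchvision.models.squeezenet1_1(weights="IMAGENET1K_V1")
squeezenet.classifier[1] = nn.Conv2d(512, NUM_CLASSES, kernel_size=1)
squeezenet.num_classes = NUM_CLASSES

# Fine-tuning setup on the labeled CXR/CT dataset (training loop omitted;
# the learning rate is an illustrative assumption).
optimizer = torch.optim.Adam(googlenet.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Example inference on one preprocessed, normalized image tensor.
x = torch.randn(1, 3, 224, 224)  # stand-in for a real CXR/CT image
googlenet.eval()
probs = torch.softmax(googlenet(x), dim=1)  # [p_negative, p_positive]
```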