Automated weeding is an important research area in agrorobotics. Weeds can be removed mechanically or with the precise application of herbicides. Deep learning techniques achieve state-of-the-art results in many computer vision tasks, yet their deployment on low-cost mobile computers remains challenging. The system described here contains several novelties compared both with its previous version and with related work. It is part of a project on an automatic weeding machine developed by the Warsaw University of Technology and MCMS Warka Ltd. The obtained models reach satisfying accuracy (detecting 47–67% of weed area while misclassifying as weed only 0.1–0.9% of crop area) at over 10 FPS on a Raspberry Pi 3B+ computer. The system was tested on four plant species at different growth stages and under different lighting conditions. Semantic segmentation is performed by convolutional neural networks with a custom architecture that combines concepts from U-Net, MobileNets, DenseNet and ResNet. The amount of manually produced ground-truth labels was significantly reduced by knowledge distillation: the final model is trained to mimic an ensemble of complex models on a large database of unlabeled data. Inference time was further reduced by two custom modifications: the use of separable convolutions in the DenseNet block and a change in the number of channels in each layer. In the authors' opinion, these novelties can be easily transferred to other agrorobotics tasks.
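To make the architectural modification concrete, the sketch below shows a DenseNet-style block in which each standard convolution is replaced by a depthwise-separable one. This is a minimal reconstruction, not the authors' code: the use of PyTorch, the growth rate, and the layer count are all illustrative assumptions.

```python
# Illustrative sketch (not the authors' implementation) of a DenseNet-style
# block using depthwise-separable convolutions, as described in the abstract.
import torch
import torch.nn as nn

class SeparableConv2d(nn.Module):
    """3x3 depthwise convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class SeparableDenseBlock(nn.Module):
    """DenseNet block: each layer sees the concatenation of all earlier ones.
    growth_rate and num_layers are hypothetical values for illustration."""
    def __init__(self, in_ch, growth_rate=12, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch),
                nn.ReLU(inplace=True),
                SeparableConv2d(ch, growth_rate),
            ))
            ch += growth_rate  # dense connectivity grows the channel count

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)
```

For typical channel counts, a depthwise-plus-pointwise pair needs roughly an order of magnitude fewer multiply-accumulate operations than a full 3×3 convolution, which is the kind of saving that makes real-time inference on a Raspberry Pi-class computer plausible.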
Abstract. Nowadays mobile robots find application in many areas of production, public transport, security and defense, exploration of space, etc. In order to make further progress in this domain of engineering, a significant barrier has to be broken: robots must be able to understand the meaning of the surrounding world. Until now, mobile robots have perceived only the geometrical features of the environment. Rapid progress in sensory devices (video cameras, laser range finders, microwave radars) and sufficient computational power available on board make it possible to develop robot controllers that possess certain knowledge about the area of application and are able to reason at a semantic level. The first part of the paper deals with mobile robots designed to operate inside buildings. A concept of semantic navigation based upon hypergraphs is introduced. It is then shown how semantic information useful for mobile robots can be extracted from the digital documentation of a building. In the second part of the paper we report the latest results on extracting semantic features from the raw data supplied by laser scanners. The aim of this research is to develop a system that enables a mobile robot to operate in a building with the ability to recognise and identify objects of certain classes. Data processing techniques involved in this system include a 3D model of the environment updated on-line, rule-based and feature-based classifiers of objects, a path planner utilizing cellular networks, and other advanced tools. Experiments carried out under real-life conditions validate the proposed solutions.
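To make the hypergraph idea more tangible, here is a small illustrative sketch (not taken from the paper) in which semantic units such as rooms and corridors are hyperedges over nodes representing doors and objects. All class, node, and method names are hypothetical.

```python
# Illustrative sketch of a hypergraph for semantic navigation: a hyperedge
# links every node that belongs to one semantic unit (e.g. a corridor),
# so a planner can reason about places rather than raw coordinates.
from collections import defaultdict

class SemanticHypergraph:
    def __init__(self):
        self.hyperedges = {}                 # unit label -> set of node ids
        self.membership = defaultdict(set)   # node id -> labels it belongs to

    def add_hyperedge(self, label, nodes):
        self.hyperedges[label] = set(nodes)
        for n in nodes:
            self.membership[n].add(label)

    def places_containing(self, node):
        """Semantic units (rooms, corridors) that a node belongs to."""
        return self.membership[node]

    def neighbours(self, node):
        """Nodes sharing at least one semantic unit with `node`."""
        out = set()
        for label in self.membership[node]:
            out |= self.hyperedges[label]
        out.discard(node)
        return out

g = SemanticHypergraph()
g.add_hyperedge("room_101", {"door_3", "window_7", "desk_1"})
g.add_hyperedge("corridor_A", {"door_3", "door_4", "door_5"})
print(g.neighbours("door_3"))  # objects reachable via shared semantic units
```

The point of the structure is that one node (here `door_3`) can belong to several semantic units at once, which a plain graph of pairwise edges cannot express directly.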
This article presents a framework for planning a drone swarm mission in a hostile environment. Elements of the planning framework are discussed in detail, including methods for planning routes for drone swarms using mixed-integer linear programming (MILP) and methods for detecting potentially dangerous objects in EO/IR camera images and synthetic aperture radar (SAR) imagery. Objects detected in the field are used in the mission planning process to re-plan the swarm's flight paths. The route planning model is discussed using the example of drone formations managed by one UAV that communicates with the ground control station (GCS) through another UAV. The article presents practical examples of using the detection algorithms for re-planning of swarm routes. A novelty of the work is the development of these algorithms so that they can run on the mobile computers carried by UAVs and be integrated with the MILP tasks. Methods for real-time detection and classification of objects by UAVs equipped with SAR and EO/IR sensors are presented. Different sensors require different detection methods: for infrared and optoelectronic sensors a convolutional neural network is used, while for SAR images a rule-based system is applied. The experimental results confirm that the stream of images can be analyzed in real time.
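To give a flavour of how detections can feed back into MILP-based re-planning, the sketch below formulates a toy drone-to-waypoint assignment using the open-source PuLP modelling library. The model, the data, and the choice of PuLP are illustrative assumptions, not the paper's formulation.

```python
# Toy MILP sketch (not the paper's model): assign swarm drones to waypoints
# at minimum total cost while excluding cells flagged as dangerous by the
# detection stage. Requires the PuLP package; all data are made up.
import pulp

drones = ["d1", "d2", "d3"]
waypoints = ["w1", "w2", "w3", "w4"]
cost = {("d1", "w1"): 4, ("d1", "w2"): 7, ("d1", "w3"): 3, ("d1", "w4"): 6,
        ("d2", "w1"): 5, ("d2", "w2"): 2, ("d2", "w3"): 8, ("d2", "w4"): 4,
        ("d3", "w1"): 6, ("d3", "w2"): 3, ("d3", "w3"): 5, ("d3", "w4"): 2}
dangerous = {"w3"}  # e.g. reported by the SAR / EO-IR detectors

prob = pulp.LpProblem("swarm_assignment", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (drones, waypoints), cat=pulp.LpBinary)

# Objective: total flight cost of the chosen assignments.
prob += pulp.lpSum(cost[d, w] * x[d][w] for d in drones for w in waypoints)

# Each drone visits exactly one waypoint, each waypoint takes at most one
# drone, and dangerous waypoints are excluded entirely.
for d in drones:
    prob += pulp.lpSum(x[d][w] for w in waypoints) == 1
for w in waypoints:
    prob += pulp.lpSum(x[d][w] for d in drones) <= 1
for w in dangerous:
    prob += pulp.lpSum(x[d][w] for d in drones) == 0

prob.solve(pulp.PULP_CBC_CMD(msg=False))
plan = {d: w for d in drones for w in waypoints if x[d][w].value() == 1}
print(plan)  # re-run with an updated `dangerous` set to re-plan routes
```

A real swarm model would add timing, collision-avoidance, and communication-relay constraints; the point here is only that re-planning after a new detection amounts to re-solving the MILP with an updated set of excluded cells.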