Energy autonomy is a key factor in improving the efficiency of mobile robotic tasks. Accurate power models enable the estimation of energy consumption along different trajectories. This article proposes a power model for two-wheel differential-drive mobile robots. The proposed model takes into account the dynamic parameters of the robot and its motors, and predicts the energy consumption of trajectories with variable accelerations and payloads. The model was experimentally validated with a Nomad Super Scout II mobile robot driven along straight and curved trajectories with different payloads and accelerations. In these experiments, the proposed model estimated energy consumption with accuracies of 96.67% along straight trajectories and 81.25% along curved trajectories.
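The abstract does not give the model's equations, but the general idea of a dynamic power model for a differential-drive robot can be sketched as follows. This is a minimal illustration, not the paper's model: the mass, friction coefficient, drivetrain efficiency, and idle power below are assumed placeholder values.

```python
import numpy as np

def energy_consumption(v, a, dt, mass=20.0, c_fric=0.3, eta=0.7, p_idle=5.0):
    """Estimate energy (J) over a trajectory sampled every dt seconds.

    v, a  : arrays of linear velocity (m/s) and acceleration (m/s^2).
    mass  : robot mass including payload (kg) -- illustrative value.
    c_fric: rolling-friction coefficient -- illustrative value.
    eta   : assumed drivetrain (motor + gearbox) efficiency.
    p_idle: assumed constant electronics draw (W).
    """
    g = 9.81
    # Traction force: inertial term (varies with acceleration) plus rolling friction.
    f_traction = mass * a + c_fric * mass * g
    # Mechanical power at the wheels; clip at zero (no regenerative braking assumed).
    p_mech = np.maximum(f_traction * v, 0.0)
    # Electrical power drawn from the battery, including idle consumption.
    p_elec = p_mech / eta + p_idle
    return float(np.sum(p_elec) * dt)

# Example: constant 0.5 m/s for 10 s (zero acceleration).
t = np.arange(0.0, 10.0, 0.1)
v = np.full_like(t, 0.5)
a = np.zeros_like(t)
E = energy_consumption(v, a, dt=0.1)
```

Because the inertial term depends on acceleration and the friction term scales with total mass, the same structure naturally covers the variable-acceleration and variable-payload trajectories the paper evaluates.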
This work proposes a new, fully automated methodology that uses computer vision and dynamic programming to obtain a 3D reconstruction of surfaces from scanning electron microscope (SEM) images based on stereovision. The horizontal stereo-matching step is performed with a robust and efficient algorithm based on semi-global matching. The cost function used in this study is very simple, since the change in brightness and contrast of corresponding pixels is negligible for the small tilt involved in stereo SEM: a sum of absolute differences (SAD) computed over a variable-size pixel window. Because it relies on dynamic programming, the matching algorithm uses an occlusion parameter that penalizes large depth discontinuities and, in practice, smooths the disparity map and the corresponding reconstructed surface. This step yields a disparity map, i.e., the differences between the horizontal coordinates of matching points in the stereo images. The horizontal disparity map is finally converted into heights according to the SEM acquisition parameters: tilt angle, image magnification, and pixel size. A validation test was first performed using, as reference, a microscopic grid with manufacturer specifications. Finally, applications of the 3D model in materials science, such as roughness parameter estimation and wear measurement, are proposed.
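Two pieces of the pipeline above can be sketched concretely: the SAD matching cost over a window, and the conversion of horizontal disparity to height. The height formula below is a commonly used small-tilt stereo-SEM approximation for an eucentric tilt between the two acquisitions, not necessarily the exact expression used in the paper; function names and the window parameter are illustrative.

```python
import numpy as np

def sad_cost(left, right, x, y, d, w=3):
    """SAD cost between a (2w+1) x (2w+1) window centered at (x, y) in the
    left image and the same-size window shifted by disparity d in the right
    image. Cast to int32 so pixel differences do not wrap around."""
    win_l = left[y - w:y + w + 1, x - w:x + w + 1].astype(np.int32)
    win_r = right[y - w:y + w + 1, x - d - w:x - d + w + 1].astype(np.int32)
    return int(np.abs(win_l - win_r).sum())

def disparity_to_height(d_px, pixel_size_um, tilt_deg):
    """Convert a horizontal disparity (in pixels) to a height (in um),
    assuming an eucentric tilt of tilt_deg between the two SEM images.
    Pixel size already accounts for image magnification."""
    d_um = d_px * pixel_size_um                      # disparity in physical units
    return d_um / (2.0 * np.sin(np.radians(tilt_deg) / 2.0))
```

In a full matcher, `sad_cost` would be evaluated over a disparity range per pixel and the dynamic-programming pass, with its occlusion penalty, would select a smooth disparity path; `disparity_to_height` is then applied elementwise to the resulting disparity map.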
Automatic crop identification and monitoring is a key element in enhancing food production processes as well as diminishing the related environmental impact. Although several efficient deep learning techniques have emerged in the field of multispectral imagery analysis, the crop classification problem still needs more accurate solutions. This work introduces a competitive methodology for crop classification from multispectral satellite imagery, mainly based on an enhanced 2D convolutional neural network (2D-CNN) with a small-scale architecture, together with a novel post-processing step. The proposed methodology comprises four steps: image stacking, patch extraction, classification model design (based on a 2D-CNN architecture), and post-processing. First, the images are stacked to increase the number of features. Second, the input images are split into patches and fed into the 2D-CNN model. Then, the 2D-CNN model is constructed within a small-scale framework and properly trained to recognize 10 different types of crops. Finally, a post-processing step is performed to reduce the classification error caused by lower-spatial-resolution images. Experiments were carried out on the Campo Verde database, a set of satellite images captured by the Landsat and Sentinel satellites over the municipality of Campo Verde, Brazil. Compared with the maximum accuracy values reached by remarkable works reported in the literature (an overall accuracy of about 81%, an F1 score of 75.89%, and an average accuracy of 73.35%), the proposed methodology achieves a competitive overall accuracy of 81.20%, an F1 score of 75.89%, and an average accuracy of 88.72% when classifying 10 different crops, while ensuring an adequate trade-off between the number of multiply-accumulate operations (MACs) and accuracy.
Furthermore, given its ability to effectively classify patches from two image sequences, this methodology may prove appealing for other real-world applications, such as the classification of urban materials.
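The first two steps of the methodology above (image stacking and patch extraction) can be sketched as follows. The patch size of 5 and the tiny 8 x 8 example cube are illustrative assumptions; the abstract does not specify these values.

```python
import numpy as np

def stack_bands(images):
    """Step 1: stack co-registered single-band images into one
    multi-band cube of shape (H, W, bands), increasing the number
    of features available per pixel."""
    return np.stack(images, axis=-1)

def extract_patches(cube, patch=5):
    """Step 2: split the stacked cube into patch x patch windows,
    one per pixel whose full window fits inside the image (stride 1).
    Each patch is what the 2D-CNN would classify into a crop type."""
    H, W, _ = cube.shape
    r = patch // 2
    patches = [cube[y - r:y + r + 1, x - r:x + r + 1, :]
               for y in range(r, H - r)
               for x in range(r, W - r)]
    return np.array(patches)            # shape (N, patch, patch, bands)

# Toy example: two 8 x 8 "bands" stacked, then cut into 5 x 5 patches.
cube = stack_bands([np.zeros((8, 8)), np.ones((8, 8))])
patches = extract_patches(cube, patch=5)
```

Each extracted patch then becomes one training or inference sample for the small-scale 2D-CNN, and the per-patch predictions are reassembled into a classification map before the post-processing step.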