Over the past decades, ego-motion estimation, or visual odometry (VO), has received considerable attention from the robotics research community, mainly due to its central role in achieving robust localization and, as a consequence, autonomy. Different solutions have been explored, leading to a wide variety of approaches, mostly grounded on geometric methodologies and, more recently, on data-driven paradigms. To guide researchers and practitioners in choosing the best VO method, several benchmark studies have been published. However, most of them compare only a small subset of the most popular approaches, usually on specific data sets or configurations. In contrast, in this work, we aim to provide a complete and thorough study of the most popular and best-performing geometric and data-driven solutions for VO. In our investigation, we consider several scenarios and environments, comparing the estimation accuracies and the role of the hyper-parameters of the selected approaches, and analyzing the computational resources they require. Experiments and tests are performed on different data sets (both publicly available and self-collected) and on two different computational boards. The experimental results show the pros and cons of the tested approaches from different perspectives. Geometric simultaneous localization and mapping methods are confirmed to be the best performing, while data-driven approaches show robustness to the nonideal conditions present in more challenging scenarios.
In this paper we propose a new framework to categorize social interactions in egocentric videos, which we name InteractionGCN. Our method extracts patterns of relational and non-relational cues at the frame level and uses them to build a relational graph from which the frame-level interactional context is estimated via a Graph Convolutional Network (GCN)-based approach. It then propagates this context over time, together with first-person motion information, through a Gated Recurrent Unit architecture. Ablation studies and experimental evaluation on two publicly available datasets validate the proposed approach and establish state-of-the-art results.
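The abstract does not specify the exact form of the graph convolution, so the following is only a minimal sketch of one generic GCN layer of the kind such an architecture could use: node features on a frame-level relational graph are mixed through a symmetrically normalized adjacency matrix and a learned projection. The function name, graph, feature dimensions, and random weights are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One generic graph-convolution step: normalized adjacency x features x weights.

    adj    : (N, N) adjacency of the frame-level relational graph (N nodes)
    feats  : (N, F) per-node cue features
    weight : (F, F') projection matrix (random here, learned in practice)
    """
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))      # symmetric normalization D^-1/2 A D^-1/2
    norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt
    return np.maximum(norm_adj @ feats @ weight, 0.0)  # ReLU activation

# Toy example: 3 people detected in a frame, 4-dim cue vectors per person.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]], dtype=float)
feats = rng.normal(size=(3, 4))
w = rng.normal(size=(4, 2))
context = gcn_layer(adj, feats, w)   # (3, 2) per-node context features
```

In a full pipeline, the per-frame output of such a layer would then be fed, together with motion information, into a recurrent unit to propagate context over time.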
The availability of real-world data in agricultural applications is of paramount importance to develop robust and effective robot-based solutions for farming operations. In this application context, however, very few data sets are available to the community, and for some important crops, such as grapes and olives, they are almost absent. Therefore, the aim of this paper is to introduce and release ARD-VO, a data set for agricultural robotics applications focused on vineyards and olive cultivations. Its main purpose is to provide researchers with an extensive set of real-world data to support the development of solutions and algorithms for precision farming technologies in the aforementioned crops. ARD-VO has been collected with an unmanned ground vehicle (UGV) equipped with heterogeneous sensors that capture information essential for robot localization and plant monitoring tasks. It is composed of sequences gathered in 11 experimental sessions between August and October 2021, navigating the UGV for several kilometers across four cultivation fields in Umbria, a central region of Italy. In addition, to highlight the utility of ARD-VO, two application case studies are presented. In the first, the data set is used to compare the performance of simultaneous localization and mapping and odometry estimation methods using vision systems, light detection and ranging, and inertial measurement unit sensors. The second shows how the multispectral images included in ARD-VO can be used to compute Normalized Difference Vegetation Index maps, which are crucial to monitor the crops and build prescription maps.
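The Normalized Difference Vegetation Index mentioned above has a standard definition, NDVI = (NIR - Red) / (NIR + Red), computed per pixel from the near-infrared and red bands of a multispectral image. The sketch below shows that computation on a toy array; the band values and the small epsilon guard are illustrative assumptions, not taken from the ARD-VO data set.

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Per-pixel NDVI from near-infrared and red reflectance bands.

    nir, red : arrays of the same shape with reflectance values
    eps      : small constant guarding against division by zero on dark pixels
    """
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 multispectral patch: healthy vegetation reflects strongly in NIR
# and absorbs red, so NDVI values approach +1 over dense canopy.
nir_band = np.array([[0.50, 0.45],
                     [0.40, 0.10]])
red_band = np.array([[0.08, 0.10],
                     [0.12, 0.09]])
ndvi_map = ndvi(nir_band, red_band)
```

Such per-pixel maps, aggregated over a field, are what feed into crop-monitoring and prescription-map workflows.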