2022 International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra46639.2022.9812150

Self-supervised Monocular Multi-robot Relative Localization with Efficient Deep Neural Networks

Abstract: Relative localization is an important ability for multiple robots to perform cooperative tasks in GPS-denied environments. This paper presents a novel autonomous positioning framework for monocular relative localization of multiple tiny flying robots. This approach does not require any ground-truth data from external systems or manual labeling. Instead, the proposed framework is able to label real-world images with 3D relative positions between robots based on another onboard relative estimation technology, usi…


Cited by 16 publications (14 citation statements)
References 28 publications
“…In the drone-to-drone (D2D) use case, we adapt PULP-Frontnet to the different task of estimating a peer drone's pose. For the same task, Li et al [15] recently also proposed a YOLOv3-based [32] CNN, but a direct comparison is not possible, since neither code nor data has been made public. Nonetheless, they validate their system only on a limited test set of 48 images from a single drone, with ground-truth labels acquired through a custom UWB 3D tracking system of unspecified accuracy.…”
Section: State of the Art Comparison and Discussion (mentioning)
“…To the best of our knowledge, SoA vision-based deep learning non-egocentric approaches do not exploit information about the robot's state within their perception process. In autonomous robotics, examples include human pose estimation [2,14], tracking of peer drones [15], or gate localization for flying through them in autonomous drone racing [16,17]. Similarly, approaches for robotic arm manipulation focus on identifying and localizing an object of interest to be grasped [1,[18][19][20].…”
Section: Related Work (mentioning)
“…The ability to estimate the position of a target robot in a video feed is crucial for many robotics tasks [1], [2], [3]. State-of-the-art (SoA) approaches use deep learning techniques based on Convolutional Neural Networks (CNNs) [4]: given a camera frame, they segment the target robot and regress the coordinates of its bounding box or its position in the image. Training these approaches to handle new robots or environments requires extensive labeled datasets, which are time-consuming and expensive to acquire, often relying on specialized hardware, e.g., motion tracking systems, to generate ground-truth labels.…”
Section: Introduction (mentioning)
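As a rough illustration of the CNN-based regression approach described in the excerpt above (a network that maps a camera frame to a target robot's bounding box or position), here is a minimal PyTorch-style sketch. It is not the cited work's architecture; the layer sizes and the four-value bounding-box output are assumptions.

```python
import torch
import torch.nn as nn

class BBoxRegressor(nn.Module):
    """Toy CNN that regresses a bounding box (cx, cy, w, h) from a grayscale frame."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling to a 64-dim feature vector
        )
        self.head = nn.Linear(64, 4)          # (cx, cy, w, h), normalized to [0, 1]

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

# Example: one 160x96 grayscale frame -> predicted box
frame = torch.rand(1, 1, 96, 160)
print(BBoxRegressor()(frame))
```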
“…Virtually all state-of-the-art 2D SLAM solutions today are designed for robots with dense and accurate sensors such as laser range-finders (LiDARs). In contrast, recent work has shown that small, agile, and cheap nano drones have the potential to carry out dangerous indoor exploration missions [2] [3]. These nano drones have limited battery and carrying capacity, so they can only mount low-power sensors that provide sparse and noisy measurements.…”
Section: Introduction (mentioning)