2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros.2017.8206280
Underwater multi-robot convoying using visual tracking by detection

Abstract: We present a robust multi-robot convoying approach that relies on visual detection of the leading agent, thus enabling target following in unstructured 3-D environments. Our method is based on the idea of tracking-by-detection, which interleaves efficient model-based object detection with temporal filtering of image-based bounding box estimates. This approach has the important advantage of mitigating tracking drift (i.e., drifting away from the target object), which is a common symptom of model-free t…
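The tracking-by-detection idea described in the abstract — a per-frame detector proposing bounding boxes, interleaved with temporal filtering that smooths the estimates and coasts through detection dropouts — can be sketched minimally as follows. This is an illustrative sketch only, not the paper's implementation: the detector stub, class name, and smoothing constant are assumptions.

```python
def stub_detector(frame):
    """Hypothetical per-frame detector: returns (x, y, w, h) or None on a miss."""
    return frame.get("bbox")

class BBoxFilter:
    """Temporal filtering of bounding-box estimates via exponential smoothing."""
    def __init__(self, alpha=0.6):
        self.alpha = alpha  # weight given to the newest detection
        self.state = None   # last smoothed (x, y, w, h), or None before first hit

    def update(self, detection):
        if detection is None:        # detector miss: hold the last estimate
            return self.state
        if self.state is None:       # first detection initializes the track
            self.state = tuple(float(v) for v in detection)
        else:                        # blend new detection with running estimate
            self.state = tuple(self.alpha * d + (1 - self.alpha) * s
                               for d, s in zip(detection, self.state))
        return self.state

# Usage: interleave detection and filtering over a short stream of frames.
frames = [{"bbox": (10, 10, 40, 40)},
          {"bbox": (14, 12, 40, 40)},
          {"bbox": None},            # detector miss: the track coasts
          {"bbox": (20, 16, 40, 40)}]
filt = BBoxFilter(alpha=0.6)
tracks = [filt.update(stub_detector(f)) for f in frames]
```

The smoothing step is the simplest stand-in for the temporal filter; it suppresses box jitter and bridges brief misses, which is the drift-mitigation property the abstract highlights over model-free trackers.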

Cited by 60 publications (42 citation statements) · References 32 publications
“…Reinforcement learning approaches have also been proposed for the IBVS setting [24], [25], [26], [27], [28], [29]. All these IBVS methods produce controllers that are tied to a single robot morphology in some way; for example, they may require visual markers on the robot [20], [21], [22], [23] or a large dataset of interactions specific to the current robot morphology and environment [24], [26], [28], [29], [30], [31], [25], [27]. In contrast, MAVRIC performs automatic self-recognition to produce a controller that adapts to new or altered robots within a few seconds.…”
Section: Related Work
confidence: 99%
“…In this case, the SSD (MobileNet V2) model was re-trained on additional data and object categories for ROV and hand gestures (used for human-robot communication [15]). The same models can be utilized in a wide range of underwater human-robot collaborative applications, such as following a team of divers, robot convoying [5], and human-robot communication [15]. In particular, if the application does not pose real-time constraints, we can use models such as Faster R-CNN (Inception V2) for better detection performance.…”
Section: Feasibility and General Applicability
confidence: 99%
“…Consequently, classical model-based detection algorithms fail to achieve good generalization performance [3,4]. On the other hand, model-free algorithms incur significant target drift [5] under such noisy conditions. Figure 1: Snapshots of a set of diverse first-person views of the robot from different diver-following scenarios.…”
Section: Introduction
confidence: 99%
“…CNN-based diver detection models have recently been investigated for underwater applications as well [16]. Once trained with sufficient data, these models are quite robust to occlusion, noise, and color distortions.…”
Section: A. Autonomous Diver Following
confidence: 99%
“…Despite the robust performance, the applicability of these models to real-time applications is often limited due to their slow running time. We refer to [16] for a detailed study on the performance and applicability of various deep visual detection models for underwater applications. In this paper, we design a CNN-based model that achieves robust detection performance in addition to ensuring that the real-time operating constraints on board an autonomous underwater robot are met.…”
Section: A. Autonomous Diver Following
confidence: 99%