2018
DOI: 10.1002/rob.21837
Understanding human motion and gestures for underwater human–robot collaboration

Abstract: In this paper, we present a number of robust methodologies for an underwater robot to visually detect, follow, and interact with a diver for collaborative task execution. We design and develop two autonomous diver-following algorithms, the first of which utilizes both spatial- and frequency-domain features pertaining to human swimming patterns in order to visually track a diver. The second algorithm uses a convolutional neural network-based model for robust tracking-by-detection. In addition, we propose a hand gesture-based human–robot communication framework …
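To make the first algorithm's frequency-domain idea concrete, below is a minimal, hypothetical sketch (not the paper's implementation): it flags a candidate image region as a likely swimming diver when the region's mean-intensity time series has a dominant spectral peak in the 1–2 Hz band commonly associated with human flipping gaits. The function name, band, and threshold ratio are invented for illustration.

import numpy as np

def has_swimming_signature(intensity_series, fps, band=(1.0, 2.0), ratio=3.0):
    """Return True if a region's mean-intensity time series shows a
    dominant spectral peak inside the assumed swimming-frequency band."""
    series = np.asarray(intensity_series, dtype=float)
    series = series - series.mean()              # remove the DC component
    spectrum = np.abs(np.fft.rfft(series))       # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(series.size, d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    if not in_band.any():
        return False                             # clip too short for this band
    peak = spectrum[in_band].max()
    background = spectrum[~in_band][1:].mean()   # skip the (zeroed) DC bin
    return peak > ratio * background             # peak must stand out clearly

# Example: a 4 s clip at 15 fps containing a 1.5 Hz oscillation registers.
t = np.arange(60) / 15.0
print(has_swimming_signature(100 + 10 * np.sin(2 * np.pi * 1.5 * t), fps=15))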

Cited by 71 publications (58 citation statements)
References 47 publications (76 reference statements)
“…Visually-guided AUVs (Autonomous Underwater Vehicles) and ROVs (Remotely Operated Vehicles) are widely used in important applications such as the monitoring of marine species migration and coral reefs [39], inspection of submarine cables and wreckage [5], underwater scene analysis, seabed mapping, human-robot collaboration [24], and more. One major operational challenge for these underwater robots is that despite using high-end cameras, visual sensing is often greatly affected by poor visibility, light refraction, absorption, and scattering [31, 45, 24]. These optical artifacts trigger non-linear distortions in the captured images, which severely affect the performance of vision-based tasks such as tracking, detection, and classification. [Figure residue: panel (a), perceptual enhancement of underwater images, Input vs. Generated.]…”
Section: Introduction (mentioning; confidence: 99%)
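As a concrete illustration of the enhancement step this statement motivates, here is a minimal sketch of one classical preprocessing technique: contrast-limited adaptive histogram equalization (CLAHE) on the lightness channel via OpenCV. This is an assumed, illustrative baseline, not the learning-based enhancement the citing work itself proposes.

import cv2

def enhance_underwater_frame(bgr_image):
    """Mildly boost local contrast in a low-visibility underwater frame
    by equalizing only the lightness channel in LAB space."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l_eq = clahe.apply(l)                        # equalize lightness only
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

Operating on the lightness channel alone avoids shifting the (already distorted) color balance further, which is why this split-channel design is a common choice for quick underwater preprocessing.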
“…Underwater missions are often conducted by a team of human divers and autonomous robots, who cooperatively perform a set of common tasks (Islam et al., 2018c; Sattar et al., 2008). The divers typically lead the tasks and interact with the robots, which follow the divers at certain stages of the mission (Islam et al., 2018a).…”
Section: Categorization of Autonomous Person-Following Behaviors (mentioning; confidence: 99%)
“…Deep learning-based object detection models have recently been investigated for underwater applications as well (Islam et al., 2018a; Shkurti et al., 2012). The state-of-the-art pre-trained models are typically trained (offline) on large underwater datasets and sometimes quantized or pruned in order to get faster inference by balancing robustness and efficiency (Islam et al., 2018a, 2018c). As illustrated in Figure 8(f), once trained with sufficient data, these models are robust to noise and color distortions; additionally, a single model can be used to detect (and track) several objects at once.…”
Section: State-of-the-Art Approaches (mentioning; confidence: 99%)
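To illustrate the quantization step this statement describes, below is a minimal, hedged sketch using PyTorch's post-training dynamic quantization. The backbone, input shape, and layer choice are assumptions for illustration; they are not the cited detection models.

import torch
import torchvision

# Illustrative backbone only (requires torchvision >= 0.13 for `weights=`);
# the cited works use their own detection architectures.
model = torchvision.models.mobilenet_v2(weights=None).eval()

# Post-training dynamic quantization: weights of the listed module types
# are stored as int8 and dequantized on the fly, shrinking the model and
# speeding up CPU inference at a small accuracy cost.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    dummy = torch.randn(1, 3, 224, 224)          # stand-in camera frame
    scores = quantized(dummy)
print(scores.shape)                              # torch.Size([1, 1000])

Pruning, the other technique the statement mentions, removes low-magnitude weights or whole channels instead of lowering numeric precision; the two are often combined to balance robustness and efficiency on embedded robot hardware.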