2019
DOI: 10.3390/jmse7010016
CADDY Underwater Stereo-Vision Dataset for Human–Robot Interaction (HRI) in the Context of Diver Activities

Abstract: In this article we present a novel underwater dataset collected from several field trials within the EU FP7 project "Cognitive autonomous diving buddy (CADDY)", where an Autonomous Underwater Vehicle (AUV) was used to interact with divers and monitor their activities. To our knowledge, this is one of the first efforts to collect a large dataset in underwater environments targeting object classification, segmentation and human pose estimation tasks. The first part of the dataset contains stereo camera recording…
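As a rough illustration of how stereo recordings of this kind are typically consumed, the sketch below pairs left- and right-camera frames by filename. The directory layout, file pattern, and function names are assumptions made for illustration only, not the dataset's documented structure.

# Minimal sketch: pairing left/right frames from an underwater stereo recording.
# The folder names and file pattern below are hypothetical, not the CADDY
# dataset's documented layout.
from pathlib import Path

import cv2  # OpenCV, used here only for image I/O


def load_stereo_pairs(root):
    """Yield (left, right) image arrays for frames present in both cameras."""
    left_dir = Path(root) / "left"    # assumed sub-folder name
    right_dir = Path(root) / "right"  # assumed sub-folder name
    for left_path in sorted(left_dir.glob("*.jpg")):
        right_path = right_dir / left_path.name
        if not right_path.exists():
            continue  # skip frames missing from one camera
        left = cv2.imread(str(left_path))
        right = cv2.imread(str(right_path))
        if left is not None and right is not None:
            yield left, right


if __name__ == "__main__":
    # Quick sanity check over the first few pairs of a hypothetical scenario folder.
    for i, (l, r) in enumerate(load_stereo_pairs("caddy_gestures/scenario_01")):
        print(f"pair {i}: left {l.shape}, right {r.shape}")
        if i >= 4:
            break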

Cited by 45 publications (19 citation statements)
References 26 publications
“…Underwater robots have typically relied on digital displays, purpose-built interaction devices, and lights for robot-to-human communication, although some types of motion (e.g., diver following [32,40,48]) can be thought of as implicit communication. The primary way in which motion has been utilized for communication underwater is as a human-to-robot communication modality in hand gestures, as in the work of Islam et al [33], or by various contributors to the CADDY project, such as in the work of Chavez et al [15,27]. However, in terms of explicit robot-to-human communication via motion, our previous work (included in this work as Pilot I) is the first investigation of this approach.…”
Section: Motion-based Communication Modalities (mentioning)
confidence: 99%
“…The addition of this device and the attention it requires limit the interactant's ability to continue with other physical tasks and navigate the environment. HRI research has explored a variety of methods for natural robot-human communication without additional devices, such as spoken language [16,52], facial expressions [26,55], and displays built into the robot [15,51]. Even these other more natural communication modalities can be problematic in the field.…”
mentioning
confidence: 99%
“…Due to the influence of strong absorption and scattering, some widely used devices designed to obtain in-air depth maps, such as Kinect units (Dancu et al, 2014), lidar (Churnside et al, 2017), and binocular stereo cameras (Deris et al, 2017), exhibit limited performance in underwater environments (Massot-Campos and Oliver-Codina, 2015; Pérez et al, 2020). As quite a few underwater RGB-D datasets (Akkaynak and Treibitz, 2019; Gomez Chavez et al, 2019; Berman et al, 2020) are currently available, many researchers have sought to adopt image processing methods to estimate the depth from a single monocular underwater image or a consecutive underwater image sequence. To perform single monocular underwater depth prediction, several restoration-based methods have been developed (e.g., UDCP; Drews et al, 2016; Ueda et al, 2019).…”
Section: Introduction (mentioning)
confidence: 99%
“…Collaborative robots are used in various applications, such as in industry (Hentout et al, 2019) for manufacturing and assembling, in robotics for rehabilitation (Aggogeri et al, 2019) and nursing (Robinson et al, 2014), and in space exploration (Bernard et al, 2018). Underwater human-robot collaboration is another potential application in this field of research (Mišković et al, 2015; Gomez Chavez et al, 2019). On one side, the autonomy of underwater robots is still limited, due to the fact that most of the communication and localization technology developed for on-land applications is impractical under water.…”
Section: Introduction (mentioning)
confidence: 99%
“…Underwater human–robot collaboration is another potential application in this field of research (Gomez Chavez et al, 2019; Islam et al, 2019; Mišković et al, 2015). On one side, the autonomy of underwater robots is still limited, due to the fact that most of the communication and localization technology developed for on-land applications is impractical under water.…”
Section: Introduction (mentioning)
confidence: 99%