2020
DOI: 10.48550/arxiv.2011.07713
Preprint
DARE: AI-based Diver Action Recognition System using Multi-Channel CNNs for AUV Supervision

Abstract: With the growth of sensing, control and robotic technologies, autonomous underwater vehicles (AUVs) have become useful assistants to human divers for performing various underwater operations. In the current practice, the divers are required to carry expensive, bulky, and waterproof keyboards or joystick-based controllers for supervision and control of AUVs. Therefore, diver action-based supervision is becoming increasingly popular because it is convenient, easier to use, faster, and cost effective. However, th…
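The abstract describes action recognition built on multi-channel CNNs, i.e. convolutions that fuse several input channels (for example, stacked video frames or image planes) into shared feature maps. As a purely illustrative sketch of that core operation, not the paper's architecture, a multi-channel convolution can be written as:

```python
# Hypothetical sketch: a single multi-channel 2D convolution step,
# the basic building block of a multi-channel CNN. Pure Python,
# valid padding, stride 1. This is NOT the DARE model itself.

def conv2d_multichannel(channels, kernels):
    """Convolve each input channel with its own kernel and sum the
    per-channel results into one output feature map."""
    assert len(channels) == len(kernels), "one kernel per channel"
    h, w = len(channels[0]), len(channels[0][0])
    kh, kw = len(kernels[0]), len(kernels[0][0])
    out_h, out_w = h - kh + 1, w - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for ch, k in zip(channels, kernels):
        for i in range(out_h):
            for j in range(out_w):
                # Accumulate this channel's response into the shared map.
                out[i][j] += sum(
                    ch[i + di][j + dj] * k[di][dj]
                    for di in range(kh)
                    for dj in range(kw)
                )
    return out

# Example: two 3x3 channels of ones, two 2x2 kernels of ones.
# Each output position sums 4 per channel over 2 channels → 8.0.
feature_map = conv2d_multichannel(
    [[[1.0] * 3 for _ in range(3)], [[1.0] * 3 for _ in range(3)]],
    [[[1.0] * 2 for _ in range(2)], [[1.0] * 2 for _ in range(2)]],
)
print(feature_map)  # → [[8.0, 8.0], [8.0, 8.0]]
```

In a full CNN, many such kernel sets would be learned, each producing one feature map, followed by nonlinearities and pooling; frameworks like PyTorch or TensorFlow implement this operation efficiently on GPUs.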

Cited by 1 publication (1 citation statement)
References 43 publications
“…The latter has proven its usability and has already been tested in several field missions, described in [12] and in [13], and on three different robotic underwater vehicles, namely BUDDY AUV [14][15][16], R2 ROV and e-URoPe [17] (see Figure 1). Moreover, the language provides a public dataset [12,18] containing images of divers' gestures and poses, allowing the scientific community to work on optimising the framework [19][20][21]. In the existing body of literature, the development of a language that promotes effective communication and collaboration between humans and robots has been recognised as a significant challenge in the field of human-robot interaction (HRI).…”
Section: Introduction
confidence: 99%