2018 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2018.8461197

Dynamic Reconfiguration of Mission Parameters in Underwater Human-Robot Collaboration

Abstract: This paper presents a real-time programming and parameter reconfiguration method for autonomous underwater robots in human-robot collaborative tasks. Using a set of intuitive and meaningful hand gestures, we develop a syntactically simple framework that is computationally more efficient than a complex, grammar-based approach. In the proposed framework, a convolutional neural network is trained to provide accurate hand gesture recognition; subsequently, a finite-state machine-based deterministic model performs …
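The abstract outlines a two-stage pipeline: a convolutional neural network recognizes individual hand gestures, and a finite-state machine then interprets the recognized gesture sequence as a mission-parameter command. The Python sketch below illustrates only the second, FSM stage under an assumed gesture vocabulary, state set, and parameter names (a start/end marker, a parameter gesture, then digit gestures); it is a minimal illustration, not the paper's actual grammar or implementation.

# Minimal sketch (not the paper's implementation): gesture tokens, assumed to
# come from a CNN gesture classifier, drive a small finite-state machine that
# turns a start -> <parameter> -> <digit>+ -> end sequence into a command.
# Gesture names, states, and parameters are illustrative assumptions.

class GestureFSM:
    def __init__(self):
        self.state = "IDLE"
        self.param = None
        self.digits = []

    def step(self, token):
        """Advance the FSM with one recognized gesture token; return a
        (parameter, value) command when a full sequence has been seen."""
        if self.state == "IDLE" and token == "start":
            self.state = "EXPECT_PARAM"
        elif self.state == "EXPECT_PARAM" and token in {"depth", "speed"}:
            self.param, self.digits, self.state = token, [], "EXPECT_VALUE"
        elif self.state == "EXPECT_VALUE" and token.isdigit():
            self.digits.append(token)
        elif self.state == "EXPECT_VALUE" and token == "end" and self.digits:
            command = (self.param, int("".join(self.digits)))
            self.__init__()          # reset for the next gesture sequence
            return command
        else:
            self.__init__()          # unexpected token: reject and reset
        return None

if __name__ == "__main__":
    fsm = GestureFSM()
    for gesture in ["start", "depth", "1", "5", "end"]:  # e.g. "set depth to 15"
        command = fsm.step(gesture)
    print(command)                                       # ('depth', 15)

Because each gesture triggers a single deterministic transition, this kind of interpreter avoids the parsing overhead of a full grammar-based language, which is the efficiency argument the abstract makes.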

Cited by 35 publications (29 citation statements) | References 21 publications
“…To avoid this, onboard systems that enable human-to-robot communication are preferable. Examples of communication systems that do not require an additional device include the use of fiducial tags [23], [9] or hand gestures [12], [7], which can be recognized and interpreted onboard a robot.…”
Section: Diver Communication (mentioning)
confidence: 99%
“…As demonstrated in Figure 10, by simply re-training on additional data and object categories, the same models can be utilized in a wide range of underwater human-robot collaborative applications such as following a team of divers, robot convoying [5], human-robot communication [15], etc.…”
[Figure 10 caption: Detection of ROVs and hand gestures by the same diver-detector model; the SSD (MobileNet V2) model was re-trained on additional data and object categories for ROV and hand gestures (used for human-robot communication [15]).]
Section: Feasibility and General Applicability (mentioning)
confidence: 99%
“…In this case, the SSD (MobileNet V2) model was re-trained on additional data and object categories for ROV and hand gestures (used for human-robot communication [15]). The same models can be utilized in a wide range of underwater human-robot collaborative applications such as following a team of divers, robot convoying [5], human-robot communication [15], etc. In particular, if the application does not pose real-time constraints, we can use models such as Faster R-CNN (Inception V2) for better detection performance.…”
Section: Feasibility and General Applicability (mentioning)
confidence: 99%
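As a rough illustration of the re-training step described in the quoted passages: the cited detector is reported as SSD with a MobileNet V2 backbone, but torchvision ships an SSDLite/MobileNet V3 variant, which is used below as a stand-in. The class list, data loader, and training schedule are assumptions for illustration, not the cited authors' setup.

# Hedged sketch: fine-tune a torchvision SSDLite detector on additional object
# categories (diver, ROV, and hand gestures). The cited work uses SSD with a
# MobileNet V2 backbone; the MobileNet V3 variant here is only a stand-in.
import torch
from torchvision.models.detection import ssdlite320_mobilenet_v3_large

# Index 0 is background; the remaining labels are assumed example categories.
CLASSES = ["__background__", "diver", "rov", "gesture_ok", "gesture_zero"]

model = ssdlite320_mobilenet_v3_large(weights=None, num_classes=len(CLASSES))
optimizer = torch.optim.SGD(model.parameters(), lr=5e-3, momentum=0.9)

def train_one_epoch(model, loader, optimizer, device="cuda"):
    """One pass over a detection dataset. `loader` is assumed to yield
    (images, targets) pairs in the standard torchvision detection format:
    each target is a dict with 'boxes' (N x 4) and 'labels' (N) tensors."""
    model.train().to(device)
    for images, targets in loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)   # per-component training losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

When real-time constraints are relaxed, a heavier two-stage detector (e.g., Faster R-CNN) can be trained in the same way for better detection performance, which is the trade-off the quoted passage points out.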
“…Among those regarding HRI in underwater environments, we cite the authors in [14,15], who developed the Robochat language providing Backus-Naur form (BNF) productions. However, the language developed was based on fiducial markers such as ARTags [16] and Fourier tags [17], lacking the simplicity and intuitiveness of gestures [18,19]. Furthermore, the authors in [20] developed a programming language for AUVs with essential instructions for mission control, with a grammar similar to assembly language: in this case, the interaction between divers and robots is missing, and the assembly-like language seems overly complex and hard to remember.…”
Section: State of the Art (mentioning)
confidence: 99%