2016 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)
DOI: 10.1109/ssrr.2016.7784305

Wearable multi-modal interface for human multi-robot interaction

Cited by 29 publications (24 citation statements)
References 11 publications
“…Robot-oriented interactions occur when a user must engage with individual robots, e.g., to make them into leaders other robots must follow [6], to hand-pick robots for a specific task [7], [8], [9], [10], [11], or to use a robot as a tangible interface for gaming and education [12], [13]. The main advantage of these interfaces is the simplicity of their abstraction (the user becomes part of the swarm); however, with collective behaviors in which the user must interact with multiple robots, the downside of this approach is the large amount of information the user must provide to the robots (e.g., the number of user commands per task).…”
Section: Related Work
confidence: 99%
“…When the operator is deployed alongside the robot and shares its environment, one may instead use proximity interaction modalities, which assume that a direct line‐of‐sight to the robot is available; different interfaces can then be used, ranging from standard joysticks (e.g., for low‐level control of UAVs) to hands‐free gesture‐based interfaces based on sensorized armbands (Wolf, Assad, Vernacchia, Fromm, & Jethani, ), armbands (Cacace et al, ; Gromov, Gambardella, & Giusti, ), smart watches (Villani et al, ) or voice commands (Gromov, Gambardella, & Di Caro, ).…”
Section: State of the Art
confidence: 99%
“…This is accomplished through the development of novel human–robot interfaces, and control and perception algorithms that allow human operators to dynamically switch between full autonomy and shared control as the rescue situation demands. Throughout the project, the member labs have made fundamental contributions in perception (Fankhauser et al, ; Gawel et al, ; Scaramuzza et al, ), control (Bellicoso, Jenelten, Gehring, & Hutter, 2018b; Faessler et al, ), and human–robot interaction (Gromov et al, ; Rognon et al, ), for flying (Falanga et al, ; Mintchev & Floreano, ), legged (Hutter et al, ), and amphibious robots (Horvat et al, 2017a). A recent research focus has been on field readiness and deployments in real‐world environments, and to that end, teams of flying, walking, and amphibious robots from NCCR have performed demonstrations in increasingly challenging and realistic environments, moving from indoor mock‐up scenarios (NCCR‐Demo, ), to the European Robotics League Emergency Robots Competition (ERL, ), and a week‐long event in a military rescue training facility.…”
Section: State of the Art
confidence: 99%
“…One important issue to be solved is the perception of the operator's gestures. Perception can be the responsibility of a robot (or of a group of cooperatively-sensing robots [20,16]); of the environment [21]; or, as in our case, of a device worn by the user [11,12,22]. The first approach is the most popular in human-robot interaction (HRI) research.…”
Section: Related Work
confidence: 99%