Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction
DOI: 10.1145/2559636.2559646
Integrating multi-modal interfaces to command UAVs

Cited by 5 publications (4 citation statements) · References 5 publications
“…On the other hand, multimodal commands offer even more possibilities than multimodal displays. The idea is to combine voice, touch, and gestures to achieve simple, natural, and fast interactions between operators and robots [24]. There are multiple approaches to commanding multi-robot missions by means of speech commands [25].…”
Section: State of the Art (mentioning)
confidence: 99%
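The combination this statement describes can be made concrete with a small command dispatcher. The sketch below is illustrative only; the event format, command names, and mapping are assumptions, not taken from the cited papers. Each modality pushes timestamped events onto one queue, and a single loop maps them to a shared command set, so voice, touch, and gesture inputs are handled uniformly.

# Hypothetical multimodal command dispatcher (names and mapping are
# illustrative assumptions, not the cited authors' design).
import queue
import time
from dataclasses import dataclass

@dataclass
class ModalEvent:
    modality: str   # "voice" | "touch" | "gesture"
    payload: str    # e.g. recognized word, touched widget, gesture label
    stamp: float    # event timestamp

# One shared command set reachable from every modality.
COMMAND_MAP = {
    ("voice", "take off"): "TAKEOFF",
    ("gesture", "wave"): "LAND",
    ("touch", "waypoint"): "GOTO_WAYPOINT",
}

def dispatch(events: "queue.Queue[ModalEvent]") -> None:
    """Consume queued events from any modality and emit unified commands."""
    while not events.empty():
        ev = events.get()
        cmd = COMMAND_MAP.get((ev.modality, ev.payload))
        if cmd is not None:
            print(f"[{ev.stamp:.2f}] {ev.modality} -> {cmd}")

if __name__ == "__main__":
    q: "queue.Queue[ModalEvent]" = queue.Queue()
    q.put(ModalEvent("voice", "take off", time.time()))
    q.put(ModalEvent("gesture", "wave", time.time()))
    dispatch(q)

Routing every channel through one queue keeps the per-modality recognizers decoupled from the robot: adding a new input means adding entries to the map, not changing the control loop.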
“…Multimodal interactions offer even more possibilities for commands than for information. The fusion of voice, touch, and gestures makes interactions between operators and robots easier, faster, and more natural (Monajjemi et al., 2014). Voice commands have been applied successfully in multi-robot missions (Kavitha et al., 2015), as have gesture commands, using both the hands (Mantecón et al., 2014) and the face (Nagi et al., 2014).…”
Section: Multimodal Interactions (unclassified)
“…In 2013, Monajjemi et al. presented a method to command a team of UAVs by using face and hand gestures [71]. Later, Monajjemi et al. extended their work to command a team of two UAVs using not only face engagement and hand gestures but also voice and touch interfaces [70]. Similar to Monajjemi's works on multi-modal interaction, MohaimenianPour & Vaughan [69] and Nagi et al. [75] realized UAV control with hands and faces by relying on visual object detectors and simple preset rules. Unlike these works, Sun et al. focused on piloting a drone with gesture recognition by combining a visual tracker with a skin pixel detector for robust performance [102].…”
Section: User Interfaces for UAVs (mentioning)
confidence: 99%
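The skin-pixel detection half of that tracker-plus-detector combination can be sketched with standard OpenCV calls. The HSV thresholds below are rough values commonly used for skin segmentation, not the ranges published by Sun et al., and the tracker seeding step is only indicated; treat the whole thing as an assumption-laden illustration.

# Illustrative skin-pixel detector (thresholds are assumptions, not the
# cited authors' values). Skin-colored pixels are segmented with a fixed
# HSV range; the largest blob's centroid can then seed a visual tracker.
import cv2
import numpy as np

# Rough HSV range often used for skin segmentation; illustrative only.
SKIN_LO = np.array([0, 40, 60], dtype=np.uint8)
SKIN_HI = np.array([25, 180, 255], dtype=np.uint8)

def skin_centroid(frame_bgr: np.ndarray):
    """Return (x, y) of the largest skin-colored blob, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_LO, SKIN_HI)
    # Remove speckle noise before picking the dominant blob.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

if __name__ == "__main__":
    # Demo on a synthetic frame containing one skin-toned square.
    frame = np.zeros((240, 320, 3), dtype=np.uint8)
    frame[80:160, 120:200] = (90, 130, 200)  # BGR roughly approximating skin
    print(skin_centroid(frame))  # -> (160, 120)

Pairing a color-based detector with a tracker in this way trades robustness roles: the detector re-acquires the hand when the tracker drifts, while the tracker smooths over frames where skin segmentation fails.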
“…A human-accompanying model should integrate the functions of human approaching, following, leading, side-by-side walking, and bird's-eye viewing from above for a more natural HDI. Similarly, a human-sensing interface should integrate at least four modalities: human tracking, hand tracking, face tracking, and voice interaction (e.g., [70]). In the engineering field, UAVs should also perform environment sensing at the same time so that they can accompany their users without hitting obstacles (e.g., [65]).…”
Section: Guidelines and Recommendations (mentioning)
confidence: 99%
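One way to read this guideline is as a fusion step feeding a behavior selector. The sketch below is a hedged illustration of that structure only; the state fields, fusion policy, and motion primitives are assumptions, not taken from the cited paper.

# Hedged sketch of a human-sensing interface integrating the four
# modalities named above (all interfaces and rules are hypothetical).
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class HumanState:
    body_xy: Optional[Tuple[float, float]] = None   # human tracking
    hand_gesture: Optional[str] = None              # hand tracking
    face_engaged: bool = False                      # face tracking
    last_utterance: Optional[str] = None            # voice interaction

def fuse(body, hand, face, voice) -> HumanState:
    """Merge per-modality readings; any modality may be missing."""
    return HumanState(body_xy=body, hand_gesture=hand,
                      face_engaged=bool(face), last_utterance=voice)

def accompany_step(state: HumanState) -> str:
    """Pick a motion primitive from the fused state (illustrative rules)."""
    if state.last_utterance == "stop":
        return "HOVER"
    if state.face_engaged and state.hand_gesture == "follow_me":
        return "FOLLOW"
    if state.body_xy is not None:
        return "SIDE_BY_SIDE"
    return "SEARCH_FOR_HUMAN"

if __name__ == "__main__":
    s = fuse(body=(1.2, 0.4), hand="follow_me", face=True, voice=None)
    print(accompany_step(s))  # -> FOLLOW

Keeping fusion separate from behavior selection mirrors the guideline's point: each sensing modality can fail or be absent, and the accompanying behavior degrades gracefully instead of depending on any single channel.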