2006
DOI: 10.1007/11552246_48
Interactive Multi-Modal Robot Programming

Abstract: The goal of the Interactive Multi-Modal Robot Programming system is a comprehensive human-machine interface that allows non-experts to compose robot programs conveniently. Two key characteristics of this novel programming approach are that the user can provide feedback interactively at any time through an intuitive interface and that the system infers the user's intent to support interaction. The framework takes a three-step approach to the problem: multi-modal recognition, intention interpretation, and prioritized task execution.
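The three-step approach named in the abstract can be sketched as a small pipeline. This is a hypothetical illustration, not the authors' implementation: all function names, the symbolic-token format, and the numeric priorities are assumptions made for the example.

```python
# Hypothetical sketch of the paper's three-step pipeline:
# recognition -> intention interpretation -> prioritized execution.
# All names and data formats here are illustrative assumptions.

def recognize(gesture: str, speech: str) -> list:
    """Step 1: translate raw gesture and speech inputs into a
    structured symbolic stream, without abstracting user intent."""
    return ["gesture:" + gesture, "speech:" + speech]

def interpret(symbols: list) -> dict:
    """Step 2: infer a structured intent from the symbol stream,
    combining the modalities into a single command."""
    intent = {"action": None, "modifier": None}
    for s in symbols:
        kind, value = s.split(":", 1)
        if kind == "speech":
            intent["action"] = value
        elif kind == "gesture":
            intent["modifier"] = value
    return intent

def execute_prioritized(tasks: list) -> list:
    """Step 3: order pending tasks by priority (lower number = more
    urgent), so an interactive 'stop' pre-empts a queued 'move'."""
    indexed = [(prio, i, t) for i, (prio, t) in enumerate(tasks)]
    return [t for _, _, t in sorted(indexed)]

intent = interpret(recognize("point-left", "move"))
order = execute_prioritized([(2, "move"), (0, "stop"), (1, "report-status")])
print(intent)  # {'action': 'move', 'modifier': 'point-left'}
print(order)   # ['stop', 'report-status', 'move']
```

The sketch keeps step 1 intent-free on purpose: the recognition layer only tokenizes, and only the interpretation layer assigns meaning, mirroring the separation the abstract describes.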

Cited by 11 publications (5 citation statements). References 66 publications.
“…(2020), which applied gesture- and voice-based multi-channel fusion to complete a human-machine collaborative task efficiently via an interaction algorithm. A multimodal interactive robotic framework based on a three-step approach (multimodal recognition, intention interpretation, and prioritized task execution) was presented in Iba et al. (2005). The multimodal recognition module translated hand gestures and spontaneous speech into a structured symbolic data stream without abstracting user intent.…”
Section: Multimodal Interactive Robot
confidence: 99%
See 1 more Smart Citation
“…(2020), which applied gesture and voice based multi-channel fusion to efficiently complete a man-machine collaborative task via an interaction algorithm. A multimodal interactive robotic framework based on a three-step approach: multimodal recognition, intention interpretation and prioritized task execution was presented in Iba et al (2005). The multimodal recognition module translated hand gestures and spontaneous speech into a structured symbolic data stream without abstracting user intent.…”
Section: Multimodal Interactive Robotmentioning
confidence: 99%
“…A multimodal interactive robotic framework based on a three-step approach (multimodal recognition, intention interpretation, and prioritized task execution) was presented in Iba et al. (2005).…”
Section: Multimodal Interactions and Communications
confidence: 99%
“…Usually, mechatronic systems use a predefined interaction language specifically oriented to a particular application; an interesting experience among them is described by Iba et al. [10]. They use multimodal interaction between humans and robots to provide a framework that lets non-expert people compose robot programs.…”
Section: Related Work
confidence: 99%
“…Perzanowski et al (2001) use a speech, gesture, and graphical interface to study natural interactions between humans and robots. Iba, Paredis, and Khosla (2002) have developed a system that incorporates speech and gesture, instead of a keyboard and joystick, to program a vacuum-cleaning mobile robot. These types of systems target users who are experts at a particular task but may have limited programming ability.…”
Section: Related Work
confidence: 99%