Conference documentation: International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI 2001)
DOI: 10.1109/mfi.2001.1013539

Sensor fusion approaches for observation of user actions in programming by demonstration

Cited by 11 publications (7 citation statements)
References 12 publications
Citation types: 0 supporting, 7 mentioning, 0 contrasting
“…For a complete review we refer to Wimmer [24] who describes different aspects of grasp sensing and a variety of sensing technologies that can be employed. We built on work by Ehrenmann et al. [5], who were among the first to use a pressure-sensing data glove and a camera system to track hand posture, position, and applied forces while users are manipulating objects. Since then, pressure sensing and motion capture approaches have been used to build more detailed models of how humans interact with objects (Saidon et al. [18]), including the creation of large gesture databases with a variety of manipulation tasks involving objects ranging from kitchen utensils to tools, mugs, jars, and toys (Verdier et al. [22]).…”
Section: Grasp Sensing for Interactive Systems (mentioning)
confidence: 99%
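The glove-plus-camera setup quoted above combines two complementary measurements: fingertip forces from the pressure-sensing glove and hand pose from the camera. The sketch below illustrates that fusion idea in Python; the sensor interfaces, observation-vector layout, and threshold are hypothetical stand-ins, not the actual pipeline of Ehrenmann et al. [5].

```python
import numpy as np

def fuse_observation(pressures: np.ndarray, hand_pose: np.ndarray) -> np.ndarray:
    """Concatenate normalized fingertip pressures (from the glove) with a
    camera-derived hand pose (x, y, z, roll, pitch, yaw) into a single
    observation vector. The layout is an illustrative assumption."""
    p = pressures / (pressures.max() + 1e-9)  # scale force readings to [0, 1]
    return np.concatenate([p, hand_pose])

def is_grasp(observation: np.ndarray, threshold: float = 0.5) -> bool:
    """Toy grasp detector: flag a grasp when at least two fingertips
    exceed the (hypothetical) normalized pressure threshold."""
    fingertip_pressures = observation[:5]  # first five entries, per layout above
    return int((fingertip_pressures > threshold).sum()) >= 2

# Example: thumb and index finger pressing while the camera reports a pose.
pressures = np.array([0.9, 0.8, 0.1, 0.0, 0.0])     # per-fingertip forces
pose = np.array([0.30, 0.12, 0.45, 0.0, 1.2, 0.4])  # meters / radians
print(is_grasp(fuse_observation(pressures, pose)))  # True
```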
“…Several articles by Ikeuchi and his coworkers [60,61,62,85,86], work by Tung and Kak [129], and contributions by Dillmann and colleagues [31,47] extend the use of vision in assembly from quality and execution control to process understanding and learning by demonstration. They all introduce systems that analyze assembly tasks demonstrated by a human and derive corresponding plans and manipulator trajectories.…”
Section: Assembly Modeling Within the SFB 360 (mentioning)
confidence: 99%
“…1.3(a) on page 7), 2D vision is sufficient to estimate their position and orientation. Dillmann and his coworkers [31,47] also combine different sensing strategies in their implementations of skill-acquiring systems. Like Tung and Kak, they process data provided by a data glove and a camera.…”
Section: Assembly Modeling Within the SFB 360 (mentioning)
confidence: 99%
“…Previous research has integrated two vision-based systems for the purpose of high-fidelity hand motion data acquisition [20]. Furthermore, various studies have integrated vision- and contact-based systems with the aim of aiding the tracking of the location of a grasped object within a hand [21,22,23,24] or improving the recognition of sign language and hand gestures [25,26,27]. These multi-sensor techniques supplement each other: the separate sensors measure different aspects of the motions of the arm and hands, after which their combined data is used for higher-level feature extraction for gesture recognition [28].…”
Section: Introduction (mentioning)
confidence: 99%
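The last excerpt describes feature-level fusion: each sensor contributes features capturing a different aspect of the motion, and the concatenated vector feeds a single higher-level gesture classifier. Below is a minimal Python sketch of that scheme; the specific features (trajectory statistics for the arm, joint-angle statistics for the glove) are illustrative assumptions, not those of the cited work.

```python
import numpy as np

def arm_features(trajectory: np.ndarray) -> np.ndarray:
    """Summarize a (T, 3) arm trajectory by mean step length and path length."""
    steps = np.linalg.norm(np.diff(trajectory, axis=0), axis=1)
    return np.array([steps.mean(), steps.sum()])

def hand_features(joint_angles: np.ndarray) -> np.ndarray:
    """Summarize (T, J) glove joint angles by per-joint mean and range."""
    return np.concatenate([joint_angles.mean(axis=0),
                           np.ptp(joint_angles, axis=0)])

def fused_features(trajectory: np.ndarray, joint_angles: np.ndarray) -> np.ndarray:
    # Each modality measures a different aspect of the motion; the
    # concatenated vector is what a gesture classifier would consume.
    return np.concatenate([arm_features(trajectory), hand_features(joint_angles)])

# Example with synthetic data: 50 samples of 3D arm positions and 5 joint angles.
rng = np.random.default_rng(0)
arm = np.cumsum(rng.normal(scale=0.01, size=(50, 3)), axis=0)  # random walk
hand = rng.uniform(0.0, 1.5, size=(50, 5))                     # radians
print(fused_features(arm, hand).shape)                         # (12,)
```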