Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction 2009
DOI: 10.1145/1514095.1514105
Egocentric and exocentric teleoperation interface using real-time, 3D video projection

Abstract: The user interface is the central element of a telepresence robotic system, and its visualization modalities greatly affect the operator's situation awareness, and thus the operator's performance. Depending on the task at hand and the operator's preferences, switching between ego- and exocentric viewpoints and improving the depth representation can provide better perspectives of the operation environment. Our system combines a 3D reconstruction of the environment, built from laser range finder readings, with two video projection …
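The truncated abstract describes texturing a laser-based 3D reconstruction with live video. As a rough illustration of that core step (not the paper's implementation), the sketch below colors camera-frame lidar points with pixels from the current video frame via a pinhole projection; the function name, intrinsic matrix `K`, and frame conventions are illustrative assumptions.

```python
# Minimal sketch, assuming the lidar points have already been transformed
# into the camera frame via an extrinsic calibration (not shown here).
import numpy as np

def project_video_onto_points(points_cam, image, K):
    """Assign an RGB color to each 3D point that falls inside the video frame.

    points_cam: (N, 3) points in the camera frame.
    image:      (H, W, 3) current video frame.
    K:          (3, 3) pinhole intrinsic matrix.
    """
    # Keep only points in front of the camera (positive depth).
    in_front = points_cam[:, 2] > 0.0
    pts = points_cam[in_front]

    # Pinhole projection: pixel = K @ (X/Z, Y/Z, 1).
    uv = (K @ (pts / pts[:, 2:3]).T).T[:, :2]
    u = uv[:, 0].astype(int)
    v = uv[:, 1].astype(int)

    # Discard projections that land outside the image.
    h, w = image.shape[:2]
    visible = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Sample the frame; points outside the view keep a zero (black) color.
    colors = np.zeros((points_cam.shape[0], 3), dtype=image.dtype)
    idx = np.flatnonzero(in_front)[visible]
    colors[idx] = image[v[visible], u[visible]]
    return colors
```

In a live system this sampling would run per frame, so the reconstruction's texture tracks the video stream rather than a static snapshot.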

Cited by 54 publications (42 citation statements)
References 19 publications (20 reference statements)
“…Additionally, primitives are classified using an understanding of viewer perspective as either egocentric, meaning the primitive represents AFF motion in a first-person interaction with the viewers themselves, or exocentric, meaning the motion represents AFF interaction with other objects or people in the environment viewed from a third-person perspective. These perspectives are adapted from work in robot perspective-taking [41] as well as teleoperation interfaces [7], which have shown the importance of considering viewer perspective in human-robot interactions. Certain primitives, such as approaching a person, can be both egocentric, as when an AFF approaches the viewer, and exocentric, as when an AFF approaches a second colocated human in the viewer's environment, and may be perceived differently across each perspective.…”
Section: AFF Motion Primitives
confidence: 99%
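Purely as an illustration of the tagging scheme the quoted passage describes, where a motion primitive may carry one or both viewer perspectives, a hypothetical encoding might look like the following; the names and types are invented for the example, not drawn from the cited work.

```python
# Hypothetical encoding of ego/exo perspective tags on motion primitives.
from dataclasses import dataclass
from enum import Flag, auto

class Perspective(Flag):
    EGOCENTRIC = auto()  # AFF interacts with the viewer (first person)
    EXOCENTRIC = auto()  # AFF interacts with others in the scene (third person)

@dataclass(frozen=True)
class MotionPrimitive:
    name: str
    perspectives: Perspective

# A primitive such as approaching a person can carry both tags.
APPROACH_PERSON = MotionPrimitive(
    "approach_person", Perspective.EGOCENTRIC | Perspective.EXOCENTRIC)
```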
“…In previous work, we developed and evaluated interfaces with different viewpoints (Ferland, Pomerleau, Le Dinh, & Michaud, 2009; Michaud, Boissy, et al., 2010). We conducted a comparative study with 37 novice operators between 1) a video-centric display, 2) an augmented reality display (superimposing the video stream on a 3D virtual model of the environment), and 3) a mixed-perspective display providing an exocentric viewpoint (the 3D model) in the center of the display and an egocentric view (the video feed), with a reference between both perspectives (the robot position).…”
Section: Augmented Teleoperation Feasibility Study
confidence: 99%
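To make the ego/exo distinction in the study quoted above concrete, here is a hedged sketch of how a mixed-perspective display might place its virtual camera: locked to the robot's onboard view (egocentric) or pulled back behind and above the robot (exocentric). The pose representation and offset values are assumptions for illustration, not the paper's implementation.

```python
# Sketch of virtual-camera placement for ego/exo viewpoints (illustrative).
import numpy as np
from dataclasses import dataclass

@dataclass
class Pose:
    position: np.ndarray  # (3,) world coordinates
    yaw: float            # heading in radians

def virtual_camera(robot: Pose, mode: str, back: float = 2.0, up: float = 1.5) -> Pose:
    if mode == "egocentric":
        # Look through the robot's own camera.
        return Pose(robot.position.copy(), robot.yaw)
    if mode == "exocentric":
        # Place the camera behind and above the robot, along its heading.
        offset = np.array([-back * np.cos(robot.yaw),
                           -back * np.sin(robot.yaw),
                           up])
        return Pose(robot.position + offset, robot.yaw)
    raise ValueError(f"unknown mode: {mode}")

# e.g. cam = virtual_camera(Pose(np.zeros(3), 0.0), "exocentric")
```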
“…This system was tested terrestrially (Fong, Pangels, & Wettergreen, 1995), and derivatives were ultimately used on the Mars Pathfinder mission. Contemporary developments include more emphasis on sensor fusion (Fong, Thorpe, & Baur, 2001) as well as efforts that display appearance and geometry in a less integrated but more usable way (Ricks, Nielsen, & Goodrich, 2004; Ferland et al., 2009).…”
Section: Related Work
confidence: 99%
“…Nielsen et al. projected the image from a monocular camera into the visualized environment along with 3D positions of obstacles detected by a horizontally mounted planar lidar (Nielsen, Goodrich, & Ricks, 2007). Ferland et al. developed a similar interface that replaced the projected monocular image with a 3D surface derived from a stereo vision system (Ferland et al., 2009). Neither of these approaches achieves a level of situational awareness that would allow high-speed vehicle teleoperation, and they are best suited to relatively benign indoor environments.…”
Section: Discriminators
confidence: 99%
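The passage quoted above attributes the 3D surface to stereo vision. As a generic sketch of that step (not the cited system's code), the function below back-projects a disparity map into a camera-frame point grid using the standard relation Z = f·b/d; the camera parameters are placeholders.

```python
# Generic disparity-to-depth back-projection (parameters are placeholders).
import numpy as np

def disparity_to_points(disparity, f, b, cx, cy):
    """Back-project a disparity map into camera-frame 3D points.

    disparity: (H, W) disparities in pixels; zero/negative entries are invalid.
    f: focal length in pixels, b: stereo baseline in meters,
    (cx, cy): principal point in pixels.
    """
    h, w = disparity.shape
    v, u = np.mgrid[0:h, 0:w]
    valid = disparity > 0
    # Depth from disparity: Z = f * b / d; invalid pixels become NaN.
    z = np.where(valid, f * b / np.maximum(disparity, 1e-6), np.nan)
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return np.dstack([x, y, z])  # (H, W, 3) surface grid, NaN where invalid
```

Because the result keeps the image's grid structure, adjacent valid points can be triangulated directly into the textured 3D surface the quoted passage describes.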