Issues such as hand and tracker jitter negatively affect user performance with the ray-casting selection technique in 3D environments. This makes it difficult for users to perform tasks that require them to select objects that have a small visible area, since small targets require high levels of precision. We introduce an approach to address this issue that uses progressive refinement of the set of selectable objects to reduce the required precision of the task. We present a design space of progressive refinement techniques and an exemplar technique called Sphere-casting refined by QUAD-menu (SQUAD). We explore the tradeoffs between progressive refinement and immediate selection techniques in an evaluation comparing SQUAD to ray-casting. Both an analytical evaluation based on a distal pointing model and an empirical evaluation demonstrate that progressive refinement selection can be better than immediate selection. SQUAD was much more accurate than ray-casting, and SQUAD was faster than ray-casting with small targets and less cluttered environments.
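The two-phase idea behind SQUAD can be illustrated with a minimal sketch. All function names, the data layout, and the four-way index split are illustrative assumptions here, not the paper's actual implementation:

```python
import math

def point_ray_distance(p, origin, direction):
    # Distance from point p to a ray (direction assumed unit-length).
    v = [p[i] - origin[i] for i in range(3)]
    t = max(0.0, sum(v[i] * direction[i] for i in range(3)))
    closest = [origin[i] + t * direction[i] for i in range(3)]
    return math.dist(p, closest)

def sphere_cast(objects, origin, direction, radius):
    """Phase 1: a low-precision volume cast. Every object near the ray
    becomes a selection candidate, so small targets need no precise
    pointing."""
    return [o for o in objects
            if point_ray_distance(o["pos"], origin, direction) <= radius]

def quad_refine(candidates, pick_quadrant):
    """Phase 2: spread the candidates across four menu quadrants; the
    user repeatedly picks the quadrant containing the target, so the
    selection narrows to one object in at most ceil(log4(n)) picks."""
    while len(candidates) > 1:
        quads = [candidates[i::4] for i in range(4)]
        quads = [q for q in quads if q]  # drop empty quadrants
        candidates = quads[pick_quadrant(quads)]
    return candidates[0]
```

For example, with 16 candidates captured by the sphere cast, the user reaches any single target in two low-precision quadrant picks, trading extra steps for a far lower precision requirement per step.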
Team ViGIR entered the 2013 DARPA Robotics Challenge (DRC) with a focus on developing software to enable an operator to guide a humanoid robot through the series of challenge tasks emulating disaster response scenarios. The overarching philosophy was to make our operators full team members, not mere supervisors. We designed our operator control station (OCS) to allow multiple operators to request and share information as needed to maintain situational awareness under bandwidth constraints, while directing the robot to perform tasks with most planning and control taking place onboard the robot. Given the limited development time, we leveraged a number of open source libraries in both our onboard software and our OCS design; this included significant use of the Robot Operating System (ROS) libraries and toolchain. This paper describes the high-level approach, including the OCS design and major onboard components, and it presents our DRC Trials results. The paper concludes with a number of lessons learned that are being applied to the final phase of the competition and are useful for related projects as well. © 2014 Wiley Periodicals, Inc.

Kohlbrecher et al.: Human-Robot Teaming for Rescue Missions • 353

…independence (Huang et al., 2007). The human members of the team function as supervisors who set high-level goals, teammates who assist the robot with perception tasks, and operators who directly change robot parameters to improve performance (Scholtz, 2003); as these roles change dynamically during a set task in our system, we will use the term operator generically. Following Bruemmer et al. (2002), we rarely operate in teleoperation mode, where we directly control a joint value; we primarily operate in shared mode, where the operator specifies tasks or goal points. In shared mode, the robot plans its motions to avoid obstacles and then executes the motion only when given permission.

Even when executing a footstep plan in autonomous mode, the operator still has supervisory control of the robot and can command the robot to stop walking at any time and safely revert to a standing posture.

Team ViGIR entered the DRC as a "Track B" team competing in the DARPA Virtual Robotics Challenge (VRC). Initially, Team ViGIR was composed of TORC Robotics, the Simulation, Systems Optimization, and Robotics Group at Technische Universität Darmstadt (TUD), and the 3D Interaction Group at Virginia Tech. With only eight months from program kickoff to the first competition, the team focused on providing basic robot capabilities needed for the three tasks in the VRC. A short overview of our VRC approach is available in Kohlbrecher et al. (2013).

While the tasks and requirements for the VRC were based on those anticipated in a real scenario, there were important differences: sensor noise was low and more predictable, simple friction models were used, there was no need for calibrating sensors or joint angle offsets for the robot, and the environments were known ahead of time. The dynamic model used for simulating the Atlas robot was ava...
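The shared-mode interaction described above — operator sets a goal, robot plans around obstacles, execution proceeds only with permission, and a stop request safely halts motion — can be sketched as a small controller. The class and method names are illustrative, not Team ViGIR's actual API:

```python
class SharedModeController:
    """Minimal sketch of shared-mode supervisory control: the robot
    plans autonomously but executes only with operator permission."""

    def __init__(self, planner):
        self.planner = planner        # callable: goal -> list of waypoints
        self.pending_plan = None
        self.stopped = False

    def request_goal(self, goal):
        """Operator specifies a task or goal point; the robot plans its
        motion but waits, surfacing the plan for operator review."""
        self.pending_plan = self.planner(goal)
        return self.pending_plan

    def approve(self):
        """Execution happens only when the operator grants permission;
        the operator can command a stop at any time."""
        if self.pending_plan is None:
            return []
        executed = []
        for waypoint in self.pending_plan:
            if self.stopped:          # supervisory stop halts execution
                break
            executed.append(waypoint)
        self.pending_plan = None
        return executed

    def stop(self):
        self.stopped = True
```

The key design point is the separation of planning from execution: the plan is always computed onboard, but the permission gate keeps the operator a full team member rather than a passive supervisor.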
Many types of virtual reality (VR) systems allow users to use natural, physical head movements to view a 3D environment. In some situations, such as when using systems that lack a fully surrounding display or when opting for convenient low-effort interaction, view control can be enabled through a combination of physical and virtual turns to view the environment, but the reduced realism could potentially interfere with the ability to maintain spatial orientation. One solution to this problem is to amplify head rotations such that smaller physical turns are mapped to larger virtual turns, allowing trainees to view the entire surrounding environment with small head movements. This solution is attractive because it allows semi-natural physical view control rather than requiring complete physical rotations or a fully-surrounding display. However, the effects of amplified head rotations on spatial orientation and many practical tasks are not well understood. In this paper, we present an experiment that evaluates the influence of amplified head rotation on 3D search, spatial orientation, and cybersickness. In the study, we varied the amount of amplification and also varied the type of display used (head-mounted display or surround-screen CAVE) for the VR search task. By evaluating participants first with amplification and then without, we were also able to study training transfer effects. The findings demonstrate the feasibility of using amplified head rotation to view 360 degrees of virtual space, but noticeable problems were identified when using high amplification with a head-mounted display. In addition, participants were able to more easily maintain a sense of spatial orientation when using the CAVE version of the application, which suggests that visibility of the user's body and awareness of the CAVE's physical environment may have contributed to the ability to use the amplification technique while keeping track of orientation.
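The amplified mapping described above can be sketched in a few lines. The linear gain here is an assumption for illustration; the study's actual amplification function may differ:

```python
def amplification_gain(max_physical_yaw_deg):
    """Choose a gain so that a comfortable physical head-turn range of
    +/- max_physical_yaw_deg maps onto the full +/-180-degree virtual
    sweep, letting the user view 360 degrees of virtual space."""
    return 180.0 / max_physical_yaw_deg

def virtual_yaw_deg(physical_yaw_deg, gain):
    # Linear amplification: a small physical turn is rendered as a
    # proportionally larger virtual turn.
    return physical_yaw_deg * gain
```

For example, if the comfortable physical range is ±60 degrees, the gain is 3.0, and a 45-degree physical turn produces a 135-degree virtual turn.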