Purpose
In “human teleoperation” (HT), mixed reality (MR) and haptics are used to tightly couple an expert leader to a human follower [1]. To determine the feasibility of HT for teleultrasound, we quantify the ability of humans to track a position and/or force trajectory via MR cues. The human response time, precision, frequency response, and step response were characterized, and several rendering methods were compared.
Methods
Volunteers (n = 11) performed a series of tasks as the follower in our HT system. The tasks involved tracking pre-recorded series of motions and forces while pose and force were recorded. The volunteers then performed frequency response tests and filled out a questionnaire.
Results
Following force and pose simultaneously was more difficult but did not lead to significant performance degradation versus following one at a time. On average, subjects tracked positions, orientations, and forces with RMS tracking errors of mm, , and N, steady-state errors of mm and N, and lags of ms, respectively. Performance decreased with input frequency, depending on the input amplitude.
Conclusion
Teleoperating a person through MR is a novel concept with many possible applications. However, the achievable tracking performance and the best-performing rendering approaches were previously unknown. This paper therefore characterizes human tracking ability in MR-based HT for teleultrasound, informing the design of future tightly coupled guidance and training systems using MR.