Figure 1: Latency was tested both through a hardware instrumentation-based measurement (bottom left) and the new cognitive latency technique (bottom right) on four different devices. The first device, Prism, was an ad-hoc system that attached a pair of colour cameras to an Acer Windows Mixed Reality headset (1, top left). This system aimed to provide top-end video see-through quality. We also tested the Oculus Quest (2), the Oculus Rift S (3), and the Valve Index (4). Bottom left: the hardware instrumentation-based measurement setup. Cameras (A and B) are synchronized by the board (2) to capture at exactly the same time, while (1) is a clock running at sub-millisecond accuracy. The clock for camera (B) is seen by the HMD (3), which displays the video see-through. The scene was kept well illuminated to reduce problems with the automatic exposure time of the HMD cameras. Bottom right: a participant performing the rapid decision-making task while wearing a video see-through VR headset.
We designed and developed a "crowdcasting" prototype that enables remote people to participate in a live event through a collection of live streams from the event. Viewers could select from a choice of streams and interact with the streamer and other viewers through text comments and heart reactions. We deployed the prototype at three live events: a Winterfest holiday festival, a local Women's March, and the South by Southwest festival. We found that viewers actively switched among the available streams and interacted with each other, especially through text comments. Streams of walking among exhibits or city sights elicited more user interaction than streams of lectures. Compared to viewers recruited through Mechanical Turk, voluntary viewers varied more widely in how long they viewed the event, switched among streams less, and interacted less through text comments.