We present a novel virtual reality (VR) system to measure soccer players' read-the-game ability. Read-the-game encompasses the visual exploratory behavioral patterns and cognitive elements required to make accurate in-game decisions. Our technological approach in the sports science domain focuses on the visuomotor component of targeted skill development in a VR simulation, because VR's high sense of immersion and its psychological byproduct, presence, make it a powerful perception-action coupling training solution for visuomotor coordination. Additionally, we analyze two critical aspects: the psychological (i.e., sense of presence) and the human-computer interaction (HCI) domain (i.e., a suitable input device for full-body immersion). To measure head movements related to visual exploration, the system tracks the user's head excursions. Specifically, the visual exploratory activity (VEA) engaged during a VR simulation is measured frame-by-frame at runtime to study the behavior of players making passing decisions under pressure from rivals in in-game situations recreated with computer graphics (CG). Additionally, the sense of presence elicited by our system is measured via the Igroup Presence Questionnaire, administered to beginner and amateur soccer players (n = 24). Regarding the HCI aspect, a comparison of input options reveals that high presence can be achieved when using full-body interactions that integrate head and body motions via a combination of an HMD and kinetic body tracking. During system verification, a difference in VEA performance is observed between beginner and amateur players. Moreover, we demonstrate the capacity of the system to measure VEA while evoking immersive in-match soccer experiences with a portable VR setup.
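The frame-by-frame VEA measurement described above can be illustrated with a minimal sketch. The abstract does not specify how head excursions are scored, so the function below, its threshold values, and the hysteresis band are illustrative assumptions: it counts how often a per-frame head-yaw trace leaves a central band, a simple proxy for exploratory head turns.

```python
# Hypothetical sketch (not the authors' published metric): count head
# "excursions" (yaw rotations beyond a threshold) in a per-frame yaw trace,
# as a simple proxy for visual exploratory activity (VEA).
# Threshold and return-band values are illustrative assumptions.

def count_head_excursions(yaw_deg, threshold=30.0, return_band=10.0):
    """Count how many times the head yaw leaves a central band.

    yaw_deg     : per-frame head yaw angles in degrees (0 = facing forward)
    threshold   : |yaw| that counts as the start of an exploratory look
    return_band : |yaw| below which the head is considered re-centered
    """
    excursions = 0
    looking_away = False
    for yaw in yaw_deg:
        if not looking_away and abs(yaw) >= threshold:
            excursions += 1          # a new exploratory head turn begins
            looking_away = True
        elif looking_away and abs(yaw) <= return_band:
            looking_away = False     # head has returned to center
    return excursions
```

The hysteresis (separate entry and return thresholds) prevents a single look from being counted multiple times when the yaw oscillates near the threshold.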
This paper introduces a method that uses multiple-view videos to estimate the 3D position of a badminton shuttlecock that moves quickly and erratically. When an object moves quickly, it is observed with a motion blur effect. By utilizing the information provided by the shape of the motion-blur region, we propose a visual tracking method for objects whose moving speed changes erratically and drastically. For cases where the speed increases tremendously, we propose another method, which applies the shape-from-silhouette technique to estimate the 3D position of a moving shuttlecock using unsynchronized multiple-view videos. We confirmed the effectiveness of our proposed techniques using video sequences and a CG simulation image set.
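A core building block of any multiple-view 3D position estimate is triangulation from calibrated cameras. The sketch below shows the standard linear (DLT) two-view triangulation, not the paper's shape-from-silhouette pipeline; the projection matrices and the test point are synthetic assumptions for illustration.

```python
import numpy as np

# Illustrative sketch: linear (DLT) triangulation of a 3D point from two
# views with known 3x4 projection matrices -- the standard building block
# for multi-view object localization. Cameras and point are synthetic.

def triangulate_dlt(P1, P2, x1, x2):
    """Recover a 3D point from pixel coordinates x1, x2 in two views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],   # each image point contributes two linear
        x1[1] * P1[2] - P1[1],   # constraints on the homogeneous 3D point
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                   # null-space (least-squares) solution
    return X[:3] / X[3]          # dehomogenize

# Synthetic check: two axis-aligned cameras observing a known point.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # shifted along x
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate_dlt(P1, P2, x1, x2)
```

With unsynchronized cameras, as in the paper, the rays from different views correspond to slightly different instants, which is one motivation for the silhouette-based alternative the authors propose.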
To compare a 3D preoperative planning image with a fluoroscopic image, a 3D bone position estimation system that displays 3D images in response to changes in the position of fluoroscopic images was developed. The objective of the present study was to evaluate the accuracy of the estimated position of 3D bone images with reference to fluoroscopic images. Bone positions were estimated from reference points on a fluoroscopic image and compared with those on a 3D image. The positional relationships of the four reference markers on the fluoroscopic image were compared with those on the 3D image to evaluate whether a 3D image could be drawn by tracking positional changes in the radius model. Intra-class correlation coefficients for reference marker distances between the fluoroscopic image and the 3D image were 0.98–0.99. Average differences between measured values on the fluoroscopic image and the 3D bone image for each marker, corresponding to the direction of the bone model, were 1.1 ± 0.7 mm, 2.4 ± 1.8 mm, 1.4 ± 0.8 mm, and 2.0 ± 1.6 mm in the anterior-posterior, ulnar-side lateral, posterior-anterior, and radial-side lateral views, respectively. Marker positions were more accurate in the anterior-posterior and posterior-anterior views than in the radial- and ulnar-side lateral views. This system enables real-time comparison of dynamic changes between preoperative 3D images and intraoperative fluoroscopy images.
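The agreement statistic reported above can be computed as an intra-class correlation. The abstract does not state which ICC form was used, so the one-way, single-measure variant ICC(1,1) below, and the sample data, are assumptions for illustration only.

```python
# Illustrative sketch: one-way, single-measure intra-class correlation,
# ICC(1,1), for paired distance measurements (e.g., fluoroscopic image vs.
# 3D image). The ICC variant and the data are illustrative assumptions.

def icc_1_1(pairs):
    """pairs: list of (measurement_a, measurement_b), one pair per target."""
    n = len(pairs)            # number of targets (e.g., marker distances)
    k = 2                     # measurements per target
    grand = sum(a + b for a, b in pairs) / (n * k)
    means = [(a + b) / 2 for a, b in pairs]
    # Between-target and within-target mean squares (one-way ANOVA):
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((a - m) ** 2 + (b - m) ** 2
              for (a, b), m in zip(pairs, means)) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Values near 1, such as the 0.98–0.99 reported in the study, indicate that almost all variance lies between markers rather than between the two imaging modalities.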
To build a robust visual tracking method, it is important to consider issues such as low observation resolution and variation in the target object's shape. When we capture a fast-moving object with a video camera, motion blur is observed. This paper introduces a visual trajectory estimation method that uses blur characteristics in 3D space. We acquire a movement speed vector based on the shape of the motion-blur region. This method can extract both the position and the speed of the moving object from an image frame and feed them into a visual tracking process using a Kalman filter. We estimated the 3D position of the object based on information obtained from two different viewpoints, as shown in Figure 1. We evaluated the proposed method by estimating the trajectory of a badminton shuttlecock from video sequences of a badminton game.
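The key idea, that the blur shape supplies a speed observation in addition to a position observation, can be sketched as a constant-velocity Kalman filter whose measurement vector contains both quantities. The 1D setup and the noise parameters below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

# Minimal 1D sketch (assumed parameters, not the paper's implementation):
# a constant-velocity Kalman filter whose measurement vector contains BOTH
# position and speed, mirroring the idea that the motion-blur shape yields
# a speed observation alongside the object's position.

def kalman_track(zs, dt=1.0, q=1e-3, r_pos=0.5, r_vel=0.5):
    """zs: per-frame measurements (position, speed); returns state history."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
    H = np.eye(2)                           # we observe position AND speed
    Q = q * np.eye(2)                       # process noise (assumed)
    R = np.diag([r_pos, r_vel])             # measurement noise (assumed)
    x = np.array([zs[0][0], zs[0][1]])      # initialize from first frame
    P = np.eye(2)
    track = [x.copy()]
    for z in zs[1:]:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (np.asarray(z) - H @ x) # update with (position, speed)
        P = (np.eye(2) - K @ H) @ P
        track.append(x.copy())
    return np.array(track)
```

Observing speed directly makes the state fully observable at every frame, which helps the filter recover quickly when the target's speed changes erratically.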
Since over one million tourists visit the Angkor ruins annually, the vibrations they cause are a major problem for maintaining the buildings. Organisms such as bryophytes, which adhere to the surface of the stones of the ruins, are another damaging factor. Using crowdsourcing and 3D reconstruction technology, we are organizing a proactive preservation project for the Bayon Temple at Angkor Thom, a World Cultural Heritage site. We evaluated its damaged parts and visualized their damaged state.