Virtual reality is a computer-constructed virtual space that gives users the opportunity to indirectly experience situations they have not encountered in real life through the realization of a virtual environment. Various studies have been conducted to realize virtual reality, in which the user interface is a major factor in maximizing immersion and usability. However, most existing methods have disadvantages: they are costly, or they restrict the user's physical activity because special devices must be attached to the user's body. This paper proposes a new type of interface that enables users to apply their intentions and actions directly to the virtual space without special devices, and test content built on the new system is introduced. Users can interact with the virtual space by throwing an object into it; to this end, moving-object detectors were built using infrared sensors. In addition, users can control the virtual space with their own postures. The method can heighten interest and concentration, increasing the sense of reality and immersion and maximizing the user's physical experience.
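The moving-object detection step could, in principle, work as follows: a thrown object briefly blocks an infrared emitter-receiver pair, producing a sharp dip in the received intensity. The sketch below is a hypothetical illustration of that idea; the sensor interface, sample values, and threshold are assumptions, not the paper's actual hardware or parameters.

```python
# Hypothetical sketch: detect a thrown object passing an infrared sensor by
# looking for a sharp drop in received IR intensity relative to a baseline.
# The sampling interface, baseline, and threshold are illustrative assumptions.

def detect_crossing(samples, baseline, threshold=0.4):
    """Return sample indices where the IR reading drops below a fraction of
    the baseline, indicating an object briefly blocking the beam."""
    events = []
    blocked = False
    for i, s in enumerate(samples):
        if not blocked and s < baseline * threshold:
            events.append(i)   # object entered the beam
            blocked = True
        elif blocked and s >= baseline * threshold:
            blocked = False    # beam restored; ready for the next object
    return events

# Simulated intensity readings: two brief occlusions of the beam.
readings = [1.0, 0.98, 0.2, 0.15, 0.9, 1.0, 0.3, 0.95]
print(detect_crossing(readings, baseline=1.0))  # → [2, 6]
```

Debouncing with the `blocked` flag ensures one occlusion produces one event even when several consecutive samples fall below the threshold.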
We propose a useful method for sharing data among multiple smart devices at close range using inaudible frequencies and Wi-Fi. Existing near-field data-sharing methods mostly use Bluetooth, but they often cannot interoperate across different operating systems. To correct this flaw, the proposed method uses inaudible frequencies emitted and captured through a smart device's built-in speaker and microphone. With the proposed method, the sending device generates trigger signals composed of inaudible sound; smart devices that receive these signals then obtain the shared data from the sending device over Wi-Fi. To evaluate its efficacy, we developed a near-field data-sharing application based on the trigger signals and conducted a performance evaluation experiment. The success rate of the proposed method was 98.8%. Furthermore, we compared the usability of the proposed method with that of the Bump application and found the proposed method more useful. Therefore, the proposed method is a practical, effective approach for sharing data among multiple smart devices at close range.
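The trigger-signal idea can be sketched as encoding a short identifier as near-ultrasonic tones that the sender plays and the receiver's microphone classifies per symbol. The frequencies, symbol duration, and bit encoding below are illustrative assumptions, not the parameters used in the paper.

```python
# Illustrative sketch of an inaudible trigger signal: each bit is a short
# near-ultrasonic tone (18-20 kHz); the receiver classifies each symbol by
# comparing signal power at the candidate frequencies (Goertzel algorithm).
# All frequencies and durations here are hypothetical.
import math

SAMPLE_RATE = 44100
SYMBOL_FREQS = {'0': 18000, '1': 19000}  # one tone per bit (assumed encoding)

def tone(freq_hz, duration_s):
    """Generate a pure sine tone as a list of float samples."""
    n = int(SAMPLE_RATE * duration_s)
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE) for i in range(n)]

def encode_trigger(bits, symbol_s=0.05):
    """Sender side: concatenate one tone per bit of the trigger identifier."""
    samples = []
    for b in bits:
        samples.extend(tone(SYMBOL_FREQS[b], symbol_s))
    return samples

def goertzel_power(samples, freq_hz):
    """Receiver side: power of `samples` at `freq_hz` via the Goertzel
    algorithm, used to decide which symbol tone is present."""
    w = 2 * math.pi * freq_hz / SAMPLE_RATE
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

sig = encode_trigger('10')
sym_len = int(SAMPLE_RATE * 0.05)
first = sig[:sym_len]  # first symbol carries bit '1' (19 kHz)
print(goertzel_power(first, 19000) > goertzel_power(first, 18000))  # → True
```

Once a receiver decodes the trigger, the actual payload would travel over Wi-Fi, keeping the audio channel limited to a short, robust announcement.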
Screen climbing games have created a new category of gaming experience between a human climber and a virtual game projected onto an artificial climbing wall. Here, climbing-motion recognition is required to interact with the game. In existing climbing games, motion recognition is based on a simple calculation using the depth difference between the climber’s body area and the climbing wall. However, the body area used in this way carries no anatomical information; the gaming system therefore cannot recognize which part, or parts, of the climber’s body are in contact with the artificial climbing wall. In this paper, we present a climbing-motion recognition method that obtains anatomical information by parsing a climber’s body area into its constituent anatomical parts. By ensuring that game events consider anatomical information, a climbing game can provide a more immersive experience for gamers.
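The depth-difference idea, extended with body-part labels, can be sketched as follows: a pixel counts as "in contact" when the climber's depth is within a small tolerance of the wall's depth, and the part-label map from body parsing tells the game which part made the contact. The tolerance, depth values, and label names below are illustrative assumptions.

```python
# Minimal sketch: combine per-pixel depth differences with a body-part label
# map (from body parsing) to report WHICH body parts touch the climbing wall.
# Tolerance, depths, and labels are hypothetical.

CONTACT_TOL = 0.05  # metres; assumed contact tolerance

def contacting_parts(climber_depth, wall_depth, part_labels):
    """Return the set of body-part labels whose pixels touch the wall."""
    parts = set()
    for row_c, row_w, row_p in zip(climber_depth, wall_depth, part_labels):
        for c, w, p in zip(row_c, row_w, row_p):
            # p is None for pixels outside the climber's body area
            if p is not None and abs(c - w) < CONTACT_TOL:
                parts.add(p)
    return parts

# Tiny 2x2 example: two pixels lie within tolerance of the wall depth.
climber = [[2.00, 1.52], [1.80, 1.51]]
wall    = [[1.50, 1.50], [1.50, 1.50]]
labels  = [['torso', 'left_hand'], [None, 'right_hand']]
print(sorted(contacting_parts(climber, wall, labels)))  # → ['left_hand', 'right_hand']
```

Without the label map, the same depth test would only report that *some* body pixels touch the wall, which is exactly the limitation the paper addresses.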