Abstract. Many tabletop systems have been developed to facilitate face-to-face collaboration and work at small meetings. These systems often require users to attach sensors to their bodies to identify their positions, but attaching a sensor to one's body can be bothersome, and user position and posture may be restricted depending on where the sensor is attached. We have proposed a technique for estimating user position in a tabletop system by image recognition and implemented a tabletop system with a user-position identification function incorporating this technique. The technique first obtains touch points and hand-area information from the user's touch operations and associates the touch points with a hand based on their positional relationships. Since the direction in which a hand is extended can be derived from that hand's touch information, the position of the user who produced the touch points belonging to that hand can be estimated. As part of this study, we also implemented a photo-object manipulation application with a function for orienting a photo object to face the user based on the results of this estimation technique. An experiment evaluating the position identification rate showed that the proposed technique could identify user position with high accuracy.

In a tabletop system, objects can be oriented in various ways by touch gestures performed by multiple users, so a user may find it difficult to read the text or photo of an object that is not currently facing in the user's direction. In response to this problem, several techniques have been developed that identify user positions by attaching sensors to chairs or to the users themselves [8], [9], [10], and these techniques are used to automatically reorient an object according to the position of the user manipulating it.
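The estimation step described above can be illustrated with a minimal sketch. It assumes normalized table coordinates, a hand-region centroid, and that hand's touch points (the paper does not publish its exact geometry, so the function name and edge labels here are hypothetical): the vector from the touch points back toward the hand centroid indicates the direction from which the arm is extended, and its dominant component picks the table edge where the user presumably stands.

```python
# Hypothetical sketch of estimating user position from one hand's
# touch information, under the assumptions stated above.

def estimate_user_edge(hand_centroid, touch_points):
    """Return the table edge ('top'/'bottom'/'left'/'right') the user
    is presumed to stand at, given a hand centroid and its touches."""
    # Centroid of the touch points belonging to this hand.
    tx = sum(p[0] for p in touch_points) / len(touch_points)
    ty = sum(p[1] for p in touch_points) / len(touch_points)
    # Vector from the touch points back toward the hand/arm.
    dx, dy = hand_centroid[0] - tx, hand_centroid[1] - ty
    # The dominant component of that vector selects the edge
    # (y grows downward, as in typical image coordinates).
    if abs(dx) > abs(dy):
        return 'right' if dx > 0 else 'left'
    return 'bottom' if dy > 0 else 'top'
```

For example, a hand whose shadow centroid lies below its touch points would be assigned to the bottom edge, and a photo object could then be rotated to face that edge.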
For example, DiamondTouch [9] registers each user and the user's position beforehand by having each user sit on a conductive sheet for identification purposes, and uses this information to determine the user's position whenever the user is identified. However, identifying user position by attaching sensors to chairs or people can be troublesome, and time must be devoted to learning how to use the sensor equipment. User posture may also be restricted depending on where the sensor is attached. On the other hand, research has been performed on an interactive system that extracts images of body extremities using image recognition so that physical movements performed by the user can be used as input operations [11]. Such an interactive system negates the need for wearing a sensor, enabling users to use the system in a free and natural manner. We propose a technique for estimating user position by image recognition in a tabletop system and construct a tabletop system incorporating this technique. This technique negates the need for wearing a sensor ...
A tabletop system can facilitate multi-user collaboration in a variety of settings, including small meetings, group work, and education and training exercises. The ability to identify the users touching the table and their positions can promote collaborative work among participants, so methods have been studied that involve attaching sensors to the table, the chairs, or the users themselves. An effective way of recognizing user actions without placing a burden on the user is visual processing, so a method that recognizes multi-touch gestures by visual means is desirable. This paper describes the development of a multi-touch tabletop system that uses infrared image recognition for user position identification and presents the results of touch-gesture recognition experiments and a system-usability evaluation. Using an inexpensive FTIR touch panel and infrared lighting, the system captures the touch areas and the shadow of the user's hand with an infrared camera, associates the hand with its touch points on the table, and estimates the position of the user touching the table. The multi-touch gestures prepared for this system include an operation that turns an object to face the user and a copy operation in which two users generate duplicates of an object. The system-usability evaluation showed that the system was easy to learn and that its operations could be performed easily.
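The association step above, in which touch points detected on the FTIR surface are matched to hand shadow regions seen by the infrared camera, can be sketched as a nearest-region assignment. This is an assumed simplification: region centroids stand in for the connected components a real image-processing pipeline would extract, and the function name is hypothetical.

```python
# Hypothetical sketch of associating touch points with hand regions
# by nearest shadow-region centroid, per the assumptions above.

def associate_touches(touch_points, hand_centroids):
    """Map each touch-point index to the index of the closest hand
    shadow region (squared Euclidean distance)."""
    assignment = {}
    for i, (tx, ty) in enumerate(touch_points):
        dists = [(tx - hx) ** 2 + (ty - hy) ** 2
                 for hx, hy in hand_centroids]
        assignment[i] = dists.index(min(dists))
    return assignment
```

Once each touch point is tied to a hand, the hand's extension direction can be used to decide which user performed the gesture, e.g., to turn an object toward that user.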