1. ABSTRACT While many sophisticated 3D rendering methods are available to produce realistic output, 3D data input is still a tedious and time-consuming task. This paper proposes a new method for modeling 3D objects using hand gestures. First, a conceptual model, the so-called "image externalization loop" model, is introduced as a framework for realizing an efficient 3D object creation environment. Then, a 3D shape-forming method for implementing the model is described in detail. Two-handed spatial and pictographic gestures are used to describe the features of the object in shape, size, and deformation pattern. Implicit superquadric functions are applied to build a deformable 3D model, with blending and axial deformations as their extensions. A generic hand gesture learning and recognition facility is developed and used to translate the gestures into specific superquadric parameters that deform the object. Finally, experimental results are presented to demonstrate the capability and usefulness of the proposed method, along with its potential application areas.
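Superquadrics of the kind this abstract refers to are conventionally defined by an implicit inside-outside function. A minimal sketch (the parameter names `a` for axis scales and `e` for shape exponents follow the standard superquadric formulation, not anything specified in the paper):

```python
def superquadric_inout(p, a=(1.0, 1.0, 1.0), e=(1.0, 1.0)):
    """Inside-outside function of a superquadric.

    Returns < 1 for points inside the surface, 1 on it, > 1 outside.
    a = (a1, a2, a3): scale along each axis.
    e = (e1, e2): shape exponents controlling roundness vs. squareness.
    """
    x, y, z = p
    a1, a2, a3 = a
    e1, e2 = e
    xy = (abs(x / a1) ** (2.0 / e2) + abs(y / a2) ** (2.0 / e2)) ** (e2 / e1)
    return xy + abs(z / a3) ** (2.0 / e1)

# With unit scales and exponents of 1 the surface is a unit sphere,
# so a point on that sphere evaluates to 1.
print(superquadric_inout((1.0, 0.0, 0.0)))  # 1.0
```

Deformations such as the blending and axial bending mentioned above are typically layered on top of this base function by transforming `p` before evaluation.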
This paper proposes a new approach to collaboratively designing original products and crafted objects in a distributed virtual environment. Special attention is paid to concept formulation and image substantiation in the early design stage. A data management strategy and its implementation method are shown to effectively share and visualize a series of shape-forming and modeling operations performed by experts on a network. A 3D object representation technique is devised to manage frequently updated geometrical information by exchanging only a small amount of data among participating systems. Additionally, we devise a method for offloading some expensive functions usually performed on a server, such as multi-resolution data management and adaptive data transmission control. Client systems are delegated to execute these functions and achieve "interactivity vs. image quality" tradeoffs based on available resources and operations in a flexible and parallel fashion.
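The idea of exchanging only a small amount of data per update can be illustrated by broadcasting compact operation records rather than full geometry, with each peer replaying the operations on its local model copy. This is a hypothetical sketch of that general pattern (the message format and operation names are ours, not the paper's protocol):

```python
import json

def encode_op(op_type, params):
    """Serialize one modeling operation as a small JSON message."""
    return json.dumps({"op": op_type, "params": params})

def apply_op(model, message):
    """Replay a received operation on the local copy of the model state."""
    op = json.loads(message)
    if op["op"] == "scale":
        model["size"] = [s * op["params"]["factor"] for s in model["size"]]
    elif op["op"] == "translate":
        model["pos"] = [p + d for p, d in zip(model["pos"], op["params"]["delta"])]
    return model

model = {"pos": [0.0, 0.0, 0.0], "size": [1.0, 1.0, 1.0]}
msg = encode_op("scale", {"factor": 2.0})  # a few dozen bytes, not a full mesh
model = apply_op(model, msg)
print(model["size"])  # [2.0, 2.0, 2.0]
```

Because every peer applies the same deterministic operations, the shared model stays consistent while network traffic stays proportional to the edits, not the geometry.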
Although most current research work on gesture interfaces deals with one-handed gestures, two-handed dynamic gestures may have the potential to provide a more stable and efficient gesture interface than one-handed gestures. They not only increase the amount of information valuable for understanding complex gestures, but also enhance expressive power significantly. We are developing an interactive two-handed gesture interface, called TGSH (Two-handed Gesture interface SHell), enabling users to manipulate a system by intuitive dynamic gestures in a 3D virtual environment. Some preliminary experiments conducted using eighteen different dynamic gestures show the advantages of using two hands. TGSH is incorporated into a 3D geometric modeler, a VR tool developed in our laboratory, to test and evaluate how the gesture interface can improve the communication performance between the users and the VR application in a 3D virtual environment.
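Recognizing a fixed vocabulary of dynamic gestures, such as the eighteen used in these experiments, can be done in its simplest form by matching an observed motion sequence against stored templates. A hypothetical sketch of such template matching (the function names, 2D point sequences, and gesture labels are illustrative, not TGSH's actual recognizer):

```python
def distance(seq_a, seq_b):
    """Mean Euclidean distance between two equal-length 2D point sequences."""
    return sum(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
               for (ax, ay), (bx, by) in zip(seq_a, seq_b)) / len(seq_a)

def classify(sequence, templates):
    """Return the label of the stored template closest to the observed sequence."""
    return min(templates, key=lambda label: distance(sequence, templates[label]))

templates = {
    "swipe_right": [(0, 0), (1, 0), (2, 0)],
    "swipe_up":    [(0, 0), (0, 1), (0, 2)],
}
print(classify([(0, 0), (1.1, 0.1), (2.0, 0.0)], templates))  # swipe_right
```

A learning facility like the one described above would populate `templates` from recorded examples instead of hand-written coordinates; two-handed input would simply concatenate both hands' trajectories into one feature sequence.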