We describe an Augmented Reality system which allows multiple participants to interact with 2D and 3D data using tangible user interfaces. The system features face-to-face communication, collaborative viewing and manipulation of 3D models, and seamless access to 2D desktop applications within the shared 3D space. All virtual content, including 3D models and 2D desktop windows, is attached to tracked physical objects in order to leverage the efficiency of natural two-handed manipulation. The presence of 2D desktop space within 3D facilitates data exchange between the two realms, enables control of 3D information by 2D applications, and generally increases productivity by providing access to familiar tools. We present a general concept for a collaborative tangible AR system, including a comprehensive set of interaction techniques, a distributed hardware setup, and a component-based software architecture which can be flexibly configured using XML. We show the validity of our concept with an implementation of an application scenario from the automotive industry.
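The abstract does not specify the XML schema or the component set, so the following is a minimal, purely hypothetical sketch of how such an XML-configured component architecture might be wired in Python using the standard-library parser. All tag names, attributes, and component classes here are illustrative assumptions, not the authors' actual design.

```python
import xml.etree.ElementTree as ET

# Hypothetical component classes; the real system's components and
# attributes are not given in the abstract.
class TrackedObject:
    def __init__(self, marker_id):
        self.marker_id = marker_id

class ModelViewer:
    def __init__(self, anchor, model_path):
        self.anchor, self.model_path = anchor, model_path

class DesktopWindow:
    def __init__(self, anchor, app):
        self.anchor, self.app = anchor, app

REGISTRY = {"model": ModelViewer, "desktop": DesktopWindow}

CONFIG = """
<scene>
  <tracker marker="42" name="table"/>
  <component type="model"   anchor="table" model_path="car_body.obj"/>
  <component type="desktop" anchor="table" app="spreadsheet"/>
</scene>
"""

def build_scene(xml_text):
    """Instantiate tracked anchors, then attach each configured
    component (3D model or 2D desktop window) to its anchor."""
    root = ET.fromstring(xml_text)
    anchors = {t.get("name"): TrackedObject(int(t.get("marker")))
               for t in root.iter("tracker")}
    components = []
    for c in root.iter("component"):
        cls = REGISTRY[c.get("type")]
        attrs = {k: v for k, v in c.attrib.items() if k != "type"}
        attrs["anchor"] = anchors[attrs["anchor"]]
        components.append(cls(**attrs))
    return components

print(build_scene(CONFIG))
```

The appeal of such a registry-plus-config split is that new component types can be added and scenes reconfigured without touching the wiring code, which matches the "flexibly configured using XML" claim.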
A robot navigating in an unstructured environment needs to avoid obstacles in its way and determine free spaces through which it can safely pass. We present here a set of optical-flow-based behaviors that allow a robot moving on a ground plane to perform these tasks. The behaviors operate on a purposive representation of the environment called the "virtual corridor", which is computed as follows: the images captured by a forward-facing camera rigidly attached to the robot are first remapped using a space-variant transformation. Then, optical flow is computed from the remapped image stream. Finally, the virtual corridor is extracted from the optical flow by applying simple but robust statistics. The introduction of a space-variant image preprocessing stage is inspired by biological sensory processing, where the projection and remapping of a sensory input field onto higher-level cortical areas represents a central processing mechanism. Such transformations lead to a significant data reduction, making real-time execution possible. Additionally, they serve to "re-present" the sensory data in terms of ecologically relevant features, thereby simplifying the interpretation by subsequent processing stages. In accordance with these biological principles, we have designed a space-variant image transformation, called the polar sector map, which is ideally suited to the navigational task. We have validated our design with simulations in synthetic environments and in experiments with real robots.
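The abstract does not give the polar sector map equations or the exact statistics, so the sketch below is only a rough approximation of the described pipeline using OpenCV: a space-variant remap of the input frames, dense optical flow on the remapped stream, and a per-sector median of the flow magnitude as the virtual-corridor estimate. The sampling geometry and all parameter values are assumptions.

```python
import cv2
import numpy as np

def polar_sector_maps(h, w, out_h=32, out_w=64):
    """Precompute remap tables for a crude space-variant 'polar sector'
    transform: columns sample viewing angle below the horizon, rows
    sample distance (log-spaced, so nearby ground gets more pixels).
    This geometry is an assumption; the paper's exact mapping differs."""
    cx, horizon = w / 2.0, h * 0.5
    angles = np.linspace(-np.pi / 3, np.pi / 3, out_w)
    radii = np.logspace(np.log10(2.0), np.log10(h - horizon - 1), out_h)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    map_x = (cx + rr * np.sin(aa)).astype(np.float32)
    map_y = (horizon + rr * np.cos(aa)).astype(np.float32)
    return map_x, map_y

def virtual_corridor(prev_gray, cur_gray, map_x, map_y):
    """Remap both frames, compute dense optical flow on the remapped
    stream, and summarize flow magnitude per angular sector with a
    robust statistic (the median). For a translating robot, large flow
    means a nearby surface, so the sector with the smallest median
    flow is taken as the most open heading."""
    p = cv2.remap(prev_gray, map_x, map_y, cv2.INTER_LINEAR)
    c = cv2.remap(cur_gray, map_x, map_y, cv2.INTER_LINEAR)
    flow = cv2.calcOpticalFlowFarneback(p, c, None,
                                        0.5, 2, 9, 3, 5, 1.1, 0)
    mag = np.linalg.norm(flow, axis=2)
    sector_flow = np.median(mag, axis=0)       # one value per sector
    return int(np.argmin(sector_flow)), sector_flow
```

Note how the data reduction claimed in the abstract falls out of the remap itself: a full camera frame is collapsed to a 32x64 array before any flow computation, so the per-frame cost of the later stages is fixed and small.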
We present a novel system for pedestrian recognition based on depth and intensity measurements. A 3D camera serves as the main sensor, providing depth and intensity images with a resolution of 64x8 pixels over a depth range of 0-20 meters. The first step extracts the ground plane from the depth image using an adaptive flat-world assumption. An AdaBoost head-shoulder detector then generates hypotheses about possible pedestrian positions. In the last step, every hypothesis is classified as pedestrian or non-pedestrian with AdaBoost or an SVM. We evaluated a number of different features known from the literature. The best result was achieved by Fourier descriptors in combination with the edges of the intensity image and an AdaBoost classifier, which yielded a recognition rate of 83.75 percent.
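As a rough illustration of the best-performing combination named above (Fourier descriptors on intensity edges, classified with AdaBoost), the sketch below uses OpenCV and scikit-learn. The edge detector, the descriptor normalization, and the classifier settings are generic textbook choices, not the parameters used in the paper.

```python
import numpy as np
import cv2
from sklearn.ensemble import AdaBoostClassifier

def fourier_descriptors(edge_img, n_coeffs=16):
    """Fourier descriptors of the largest contour in a binary edge image.
    Dropping the DC term gives translation invariance; dividing by the
    first harmonic's magnitude gives scale invariance; keeping only the
    magnitudes discards the contour's starting point."""
    contours, _ = cv2.findContours(edge_img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return np.zeros(n_coeffs)
    pts = max(contours, key=cv2.contourArea).squeeze(1)
    z = pts[:, 0] + 1j * pts[:, 1]          # contour as complex signal
    if len(z) < n_coeffs + 1:               # contour too short to describe
        return np.zeros(n_coeffs)
    spec = np.fft.fft(z)
    mags = np.abs(spec[1:n_coeffs + 1])     # skip the DC term
    return mags / (mags[0] + 1e-9)

def train_pedestrian_classifier(intensity_patches, labels):
    """Edges of each intensity patch -> Fourier descriptors -> AdaBoost,
    mirroring the feature/classifier pairing named in the abstract."""
    feats = [fourier_descriptors(cv2.Canny(p, 50, 150))
             for p in intensity_patches]
    clf = AdaBoostClassifier(n_estimators=100)
    clf.fit(np.array(feats), np.array(labels))
    return clf
```

In use, the patches fed to `train_pedestrian_classifier` would be the intensity crops corresponding to the head-shoulder detector's hypotheses, with labels taken from annotated ground truth.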