Combined efforts in the fields of neuroscience, computer science, and biology have made it possible to design biologically realistic brain models based on spiking neural networks. Properly validating these models requires embodiment in a dynamic, sensory-rich environment in which the model is exposed to a realistic sensory-motor task. Because these brain models are, at the current stage, too complex to satisfy real-time constraints, they cannot be embedded in a real-world task; the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, so far no tool makes it easy to establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure for connecting brain models to detailed simulations of robot bodies and environments, and for using the resulting neurorobotic systems for in silico experimentation. To simplify the workflow and reduce the required programming skills, the platform provides editors for specifying experimental sequences and conditions, environments, robots, and brain-body connectors. In addition, a variety of ready-made robots and environments is provided. This work presents the architecture of the first release of the Neurorobotics Platform, developed in subproject 10 "Neurorobotics" of the Human Brain Project (HBP). In its current state, the Neurorobotics Platform allows researchers to design and run basic neurorobotics experiments using simulated robots and simulated environments linked to simplified versions of brain models. We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases allow us to assess the applicability of the Neurorobotics Platform to robotic tasks as well as neuroscientific experiments.
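The closed brain-body loop the platform mediates can be pictured with a toy version of the Braitenberg experiment mentioned above. The sketch below is a self-contained illustration, not the platform's API: the real system couples full spiking-network and physics simulators through user-defined transfer functions, whereas here a two-unit rate model and planar kinematics stand in for both, and all names are invented.

```python
# Toy closed perception-action loop: a "brain" of two rate units (standing in
# for spiking populations) is coupled to a two-wheeled robot by a transfer
# function, the connector concept the platform is built around.

import math

class ToyBrain:
    """Two rate units, each driven by one 'eye' of the robot."""
    def __init__(self):
        self.rate = {"left": 0.0, "right": 0.0}

    def inject(self, side, stimulus):
        # Leaky integration toward the stimulus, standing in for spiking dynamics.
        self.rate[side] += 0.2 * (stimulus - self.rate[side])

class ToyRobot:
    """Differential-drive kinematics on a plane, with a light source at the origin."""
    def __init__(self):
        self.x, self.y, self.heading = 2.0, 1.0, 0.0

    def light_on_side(self, side):
        # Brightness per eye: stronger when the light lies on that eye's side.
        bearing = math.atan2(-self.y, -self.x) - self.heading
        offset = 0.5 if side == "left" else -0.5
        return max(0.0, math.cos(bearing - offset)) / (1.0 + self.x**2 + self.y**2)

    def drive(self, v_left, v_right, dt=0.1):
        self.heading += (v_right - v_left) * dt
        speed = 0.5 * (v_left + v_right)
        self.x += speed * math.cos(self.heading) * dt
        self.y += speed * math.sin(self.heading) * dt

def transfer_function(brain, robot):
    """One control step: robot sensors -> neurons, neuron rates -> motors."""
    for side in ("left", "right"):
        brain.inject(side, robot.light_on_side(side))
    # Crossed excitatory wiring (Braitenberg vehicle 2b): turn toward the light.
    robot.drive(v_left=brain.rate["right"], v_right=brain.rate["left"])

brain, robot = ToyBrain(), ToyRobot()
for _ in range(500):
    transfer_function(brain, robot)
print(f"distance to light after 500 steps: {math.hypot(robot.x, robot.y):.2f}")
```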
The Internet of Things (IoT) is rapidly emerging as one of the dominant computing paradigms of this decade. Applications range from in-home entertainment to large-scale industrial deployments such as controlling assembly lines and monitoring traffic. While IoT devices are in many respects similar to traditional computers, user expectations and deployment scenarios, as well as cost and hardware constraints, are sufficiently different to create new security challenges as well as new opportunities. This is especially true for large-scale IoT deployments in which a central entity deploys and controls a large number of IoT devices with minimal human interaction. Like traditional computers, IoT devices are subject to attack and compromise. Large IoT deployments consisting of many nearly identical devices are especially attractive targets. At the same time, recovery from root compromise by conventional means becomes costly and slow, even more so if the devices are dispersed over a large geographical area. In the worst case, technicians have to travel to every device and recover it manually. Data-center solutions such as the Intelligent Platform Management Interface (IPMI), which rely on separate service processors and network connections, are not supported by existing IoT hardware and are unlikely to be in the foreseeable future, given the cost constraints of mainstream IoT devices. This paper presents CIDER, a system that can recover the IoT devices of a large deployment within a short amount of time, even if attackers have taken root control of every device. The recovery requires minimal manual intervention: after identifying the compromise and producing a patched firmware image, the administrator can instruct CIDER to force the devices to reset and install the new image. We demonstrate the universality and practicality of CIDER by implementing it on three popular IoT platforms (HummingBoard Edge, Raspberry Pi Compute Module 3, and Nucleo-L476RG) spanning the range from high end to low end. Our evaluation shows that the performance overhead of CIDER is generally negligible.
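The abstract does not describe CIDER's internals, but the guarantee it states (recovering devices whose software is fully attacker-controlled) is commonly built from two pieces: a watchdog that untrusted software can postpone only with a fresh, cryptographically signed ticket, and a reset path that boots nothing but correctly signed firmware. The sketch below illustrates that combination; the key handling, ticket format, and all names are assumptions for illustration, not CIDER's actual design.

```python
# Hypothetical sketch of guaranteed remote recovery: an authenticated watchdog
# that only signed "deferral tickets" can postpone, plus a gated boot path
# that verifies firmware signatures before jumping to the image.

import hmac, hashlib

ADMIN_KEY = b"provisioned-at-manufacture"   # illustrative shared secret

def sign(counter: int) -> bytes:
    """Administrator side: authorize one more watchdog period of uptime."""
    return hmac.new(ADMIN_KEY, counter.to_bytes(8, "big"), hashlib.sha256).digest()

class AuthenticatedWatchdog:
    """Counts down in protected hardware; untrusted software cannot stop it,
    only feed it tickets bound to the current counter (no replay)."""
    def __init__(self):
        self.counter = 0

    def feed(self, ticket: bytes) -> bool:
        if hmac.compare_digest(ticket, sign(self.counter)):
            self.counter += 1          # accept: postpone reset, bump counter
            return True
        return False                   # reject: countdown continues to reset

def gated_boot(image: bytes, signature: bytes) -> None:
    """Reset path: only a correctly signed image may boot; otherwise the
    device stays in the recovery loader and downloads a fresh image."""
    expected = hmac.new(ADMIN_KEY, image, hashlib.sha256).digest()
    if not hmac.compare_digest(signature, expected):
        raise RuntimeError("unsigned image: remain in recovery and re-download")
    # ... jump to verified image ...

wd = AuthenticatedWatchdog()
assert wd.feed(sign(0))             # healthy device, admin keeps it alive
assert not wd.feed(b"\x00" * 32)    # compromised firmware cannot forge a ticket
```

Under this scheme, forcing recovery requires no per-device action: the administrator simply stops issuing tickets, and within one watchdog period every device, compromised or not, resets into the gated boot path and installs the patched image.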
As tracking setups become increasingly complex, it gets harder to find suitable algorithms for tracking, calibration, and sensor fusion. The literature offers a large number of solutions for various combinations of sensors; however, no development methodology is available for the systematic analysis of tracking setups. When a system is modeled as a spatial relationship graph (SRG), which describes coordinate systems and the known transformations between them, every algorithm used for tracking and calibration corresponds to a certain pattern in the graph. This paper introduces a formal model for representing such spatial relationship patterns and presents a small catalog of patterns frequently used in augmented reality systems. We also describe an algorithm that identifies patterns in SRGs at runtime in order to automatically construct data flow networks for tracking and calibration.
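To make the pattern idea concrete, the toy sketch below models an SRG as a dictionary of directed edges and matches the simplest useful pattern, the concatenation of two transformations, to derive a new edge. The graph contents and the restriction to this single pattern are illustrative; the paper's actual catalog and matching algorithm are considerably richer.

```python
# Toy SRG pattern matching: edges store known transformations between
# coordinate frames; matching the "concatenation" pattern A->B->C derives a
# new edge A->C by composing the two transformations.

from itertools import product

# Transformations kept abstract as composable functions point -> point.
def translate(dx, dy):
    return lambda p: (p[0] + dx, p[1] + dy)

# SRG: edge (source frame, target frame) -> transformation.
srg = {
    ("camera", "marker"): translate(1.0, 0.0),   # measured by the tracker
    ("marker", "tool"):   translate(0.0, 2.0),   # known from calibration
}

def match_concatenation(graph):
    """Find every instance of the pattern A->B->C with no direct A->C edge
    and emit the inferred composed edge. Iterating this to a fixed point
    yields all transformations derivable by chaining."""
    inferred = {}
    for (a, b1), (b2, c) in product(graph, graph):
        if b1 == b2 and a != c and (a, c) not in graph:
            f, g = graph[(a, b1)], graph[(b2, c)]
            inferred[(a, c)] = lambda p, f=f, g=g: g(f(p))  # A->B, then B->C
    return inferred

srg.update(match_concatenation(srg))
print(srg[("camera", "tool")]((0.0, 0.0)))   # -> (1.0, 2.0)
```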
Multi-touch interfaces have been a focus of research in recent years, resulting in the development of various innovative UI concepts. Support for existing WIMP interfaces, however, should not be overlooked. Although several approaches exist, there is still room for improvement, particularly regarding the implementation of the "hover" state commonly used in mouse-based interfaces. In this paper, we present a multi-touch system designed to address this problem. A multi-touch table based on FTIR (frustrated total internal reflection) is extended with a ceiling-mounted light source to create shadows of hands and arms. By tracking these shadows with the rear-mounted camera that is already present in the FTIR setup, users can control multiple cursors without touching the table and trigger a "click" event by tapping the surface with any finger of the corresponding hand. An informal evaluation with 15 subjects found an improvement in accuracy when compared to an unaugmented touch screen.
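As a rough illustration of the image processing this setup enables, the sketch below classifies a single camera frame against a touch-free reference: regions darker than the reference are hover shadows, regions brighter are FTIR touch points. The thresholds, the centroid-as-cursor shortcut, and the use of scipy's blob labeling are assumptions for illustration, not the authors' pipeline.

```python
# One frame, two thresholds: the rear camera sees touches as bright FTIR
# blobs, while the ceiling light makes hovering hands cast dark shadows on
# the diffuser. Shadows drive hover cursors; a bright blob inside a shadow
# is interpreted as that hand's "click".

import numpy as np
from scipy import ndimage  # connected-component labeling

def classify_frame(frame, ambient):
    """frame, ambient: 2D grayscale arrays; ambient is a touch-free reference."""
    diff = frame.astype(int) - ambient.astype(int)

    shadows = diff < -40          # darker than baseline: hand/arm shadow
    touches = diff > 60           # brighter than baseline: FTIR touch point

    cursors = []
    labels, n = ndimage.label(shadows)
    for i in range(1, n + 1):
        blob = labels == i
        # Centroid stands in for proper fingertip extraction from the shadow.
        cy, cx = ndimage.center_of_mass(blob)
        # A touch blob inside this shadow: the hovering hand tapped the surface.
        clicked = bool(np.any(touches & blob))
        cursors.append({"pos": (cx, cy), "click": clicked})
    return cursors
```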
Ubiquitous tracking setups, covering large tracking areas with many heterogeneous sensors of varying accuracy, require dedicated middleware that facilitates the development of stationary and mobile applications by providing a simple interface and encapsulating the details of sensing, calibration, and sensor fusion. In this paper we present a centrally coordinated peer-to-peer architecture for ubiquitous tracking in which a server computes optimal data flow configurations for sensor and application clients, which then exchange tracking data directly and with low latency using a lightweight data flow framework. The server's decisions are inferred from an actively maintained central spatial relationship graph using spatial relationship patterns. The system is compared to a previous Ubitrack implementation based on the highly distributed DWARF middleware and exhibits significantly better performance in a reference scenario.

MOTIVATION

In industrial augmented reality scenarios, there is a growing demand for integrated working environments that span large factory buildings. In such an environment, many different mobile and stationary AR-supported applications, such as logistics, production, maintenance, or factory planning, may coexist and require shared access to permanent tracking with varying accuracy requirements. Today, no single technology satisfies the tracking requirements of all these applications and can, at least for a reasonable price, be deployed throughout such an environment. For this reason, a realistic setup would combine many different tracking systems, ranging from low-precision wide-area WLAN tracking to infrared-optical systems covering only small areas with high accuracy. The installation, maintenance, and expansion of such a large-scale heterogeneous tracking environment poses new challenges to the underlying middleware concepts.

Heterogeneous wide-area tracking environments

Emerging tracking methods based on technologies like WLAN or RFID make it possible to deploy tracking to ever-larger indoor areas. With increasing tracker coverage, a larger diversity of AR applications will need to share this tracking infrastructure. Stationary applications that are already in use will increasingly be complemented by mobile applications that would have been impossible without wide-area tracking. Applications that are stationary today might also benefit from enlarging tracking areas and become more adaptive and better integrated into the productive environment. Many of these wide-area tracking systems have the drawback of being rather imprecise. Nevertheless, they serve quite well for navigation problems and can thereby bridge the gap between islands of higher tracking accuracy. Furthermore, they can provide useful initial positions to other sensors, such as markerless optical trackers [7]. There are also many examples where a fusion of measurements from different mobile and stationary sensors improves overall trac...
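The coordination scheme summarized in the abstract above can be condensed into a few lines: sensor clients register the transformations they measure with a central server; an application client then asks the server for a data flow, and the server plans one over its spatial relationship graph, telling the client which peers to pull measurements from directly. The sketch below invents the peer addresses, message format, and a plain breadth-first planner purely for illustration; Ubitrack's actual protocol and its accuracy-aware planning are richer.

```python
# Centrally coordinated peer-to-peer tracking, reduced to a toy: the server
# only plans data flows; measurements themselves bypass it and flow between
# peers.

from collections import deque

class CoordinationServer:
    """Holds the central spatial relationship graph; each edge is annotated
    with the network address of the peer producing that measurement."""
    def __init__(self):
        self.edges = {}   # (from_frame, to_frame) -> producing peer address

    def register(self, peer, from_frame, to_frame):
        self.edges[(from_frame, to_frame)] = peer

    def plan_data_flow(self, query_from, query_to):
        """Breadth-first search over the SRG; returns the (edge, peer) pairs
        the querying client must subscribe to, plus a final composition step.
        Real planning would also weigh accuracy and insert fusion components."""
        frontier = deque([(query_from, [])])
        seen = {query_from}
        while frontier:
            frame, path = frontier.popleft()
            if frame == query_to:
                return {"subscribe": path, "component": "concatenate"}
            for (a, b), peer in self.edges.items():
                if a == frame and b not in seen:
                    seen.add(b)
                    frontier.append((b, path + [((a, b), peer)]))
        return None   # no transformation chain known

server = CoordinationServer()
server.register("art1.local:3000", "hall", "target")    # IR-optical tracker
server.register("wlan-gw.local:3001", "target", "hmd")  # coarse WLAN tracker
print(server.plan_data_flow("hall", "hmd"))
# -> subscribe to both peers directly; concatenate their measurements locally
```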