Electronic properties located on the atoms of a molecule, such as partial atomic charges, electronegativity, and polarizability values, are encoded in an autocorrelation vector that accounts for the constitution of the molecule. This encoding procedure is able to distinguish dopamine agonists from benzodiazepine receptor agonists even after projection onto a two-dimensional self-organizing network. The two types of compounds can still be distinguished when they are buried in a dataset of 8323 compounds from a chemical supplier catalog covering a wide structural variety. The maps obtained by this sequence of steps (calculation of empirical physicochemical effects, encoding in a topological autocorrelation vector, and projection by a self-organizing neural network) can thus be used for searching for structural similarity and, in particular, for finding new lead structures with biological activity.
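As an illustration of the encoding step, the sketch below computes a topological autocorrelation vector for a toy molecular graph: for each topological distance d, products of atomic property values are summed over all atom pairs separated by d bonds. The graph, the property values, and the maximum distance are placeholders chosen for the example, not the parametrization used in the study.

```python
import numpy as np
from collections import deque

def topological_distances(adjacency):
    """All-pairs shortest path lengths (in bonds) via BFS on the molecular graph."""
    n = len(adjacency)
    dist = np.full((n, n), -1, dtype=int)
    for start in range(n):
        dist[start, start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adjacency[u]:
                if dist[start, v] == -1:
                    dist[start, v] = dist[start, u] + 1
                    queue.append(v)
    return dist

def autocorrelation_vector(adjacency, properties, max_dist=7):
    """Concatenate A(1)..A(max_dist), where A(d) sums p_i * p_j over atom pairs at distance d."""
    dist = topological_distances(adjacency)
    props = np.asarray(properties, dtype=float)      # shape: (n_atoms, n_properties)
    vec = []
    for d in range(1, max_dist + 1):
        i, j = np.where(np.triu(dist == d))          # unique atom pairs at distance d
        vec.extend((props[i] * props[j]).sum(axis=0))
    return np.array(vec)

# Example: a toy 4-atom chain with two illustrative property values per atom
# (e.g. a partial charge and an electronegativity-like value).
adj = [[1], [0, 2], [1, 3], [2]]
props = [[0.10, 2.2], [-0.20, 2.5], [0.05, 3.0], [-0.30, 3.4]]
print(autocorrelation_vector(adj, props, max_dist=3))
```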
This paper explores a view-based approach to recognizing free-form objects in range images. We use a set of local features that are easy to calculate and robust to partial occlusions. By combining those features in a multidimensional histogram, we can obtain highly discriminant classifiers without the need for segmentation. Recognition is performed using either histogram matching or a probabilistic recognition algorithm. We compare the performance of both methods in the presence of occlusions and test the system on a database of almost 2000 full-sphere views of 30 free-form objects. The system achieves a recognition accuracy above 93% on ideal images and of 89% with 20% occlusion.
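A minimal sketch of the histogram-matching variant is given below, assuming each view is summarized by a normalized multidimensional histogram of its local feature vectors and compared by histogram intersection; the feature dimensionality, bin counts, and similarity measure are illustrative assumptions rather than the paper's exact choices.

```python
import numpy as np

def feature_histogram(features, bins=8, ranges=None):
    """Normalized d-dimensional histogram of local feature vectors (one row per feature)."""
    hist, _ = np.histogramdd(features, bins=bins, range=ranges)
    return hist / max(hist.sum(), 1e-12)

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical feature distributions."""
    return np.minimum(h1, h2).sum()

def recognize(query_features, model_histograms, bins=8, ranges=None):
    """Return the stored model whose histogram best matches the query view."""
    q = feature_histogram(query_features, bins, ranges)
    scores = {name: histogram_intersection(q, h) for name, h in model_histograms.items()}
    return max(scores, key=scores.get), scores

# Example: two stored model views and one query view, each described by 2-D local features.
rng = np.random.default_rng(0)
R = [(-4.0, 6.0), (-4.0, 6.0)]                      # common bin ranges for all views
models = {"mug": feature_histogram(rng.normal(0, 1, (500, 2)), ranges=R),
          "phone": feature_histogram(rng.normal(2, 1, (500, 2)), ranges=R)}
best, scores = recognize(rng.normal(0, 1, (300, 2)), models, ranges=R)
print(best, scores)
```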
Combined efforts in the fields of neuroscience, computer science, and biology have made it possible to design biologically realistic models of the brain based on spiking neural networks. To properly validate these models, they need to be embodied in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task. Because of their complexity, these brain models cannot currently meet real-time constraints, so it is not possible to embed them in a real-world task; rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, so far no tool makes it easy to establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure that allows them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. To simplify the workflow and reduce the required level of programming skill, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain–body connectors. In addition, a variety of existing robots and environments is provided. This work presents the architecture of the first release of the Neurorobotics Platform developed in subproject 10 “Neurorobotics” of the Human Brain Project (HBP). In its current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models. We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases allow us to assess the applicability of the Neurorobotics Platform to robotic tasks as well as to neuroscientific experiments.
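The sketch below illustrates the general idea of a brain–body connector in such a closed loop: at each simulation step, sensor data are translated into an input drive for the brain model, and population firing rates are translated back into motor commands. The ToyBrain and ToyRobot stubs and the rate-to-velocity mapping are purely illustrative placeholders and do not represent the Neurorobotics Platform's actual API.

```python
import numpy as np

class ToyBrain:
    """Stand-in for a spiking network: two motor populations driven by one input."""
    def __init__(self):
        self.rates = np.zeros(2)
    def set_input(self, drive):
        self.drive = drive
    def run(self, dt):
        self.rates = 50.0 * self.drive * np.array([1.0, 0.8])   # fake firing rates (Hz)
    def motor_rates(self):
        return self.rates

class ToyRobot:
    """Stand-in for a simulated robot with a camera and a differential drive."""
    def camera_image(self):
        return np.random.rand(32, 32, 3)                        # fake RGB frame
    def set_wheel_speeds(self, left, right):
        print(f"wheel speeds: {left:.2f}, {right:.2f}")

def transfer_step(brain, robot, dt=0.02, gain=0.05):
    """One closed-loop step: sensors -> brain input, brain rates -> motor command."""
    drive = float(robot.camera_image()[..., 0].mean())           # e.g. mean red intensity
    brain.set_input(drive)
    brain.run(dt)                                                # advance the brain model
    left, right = gain * brain.motor_rates()                     # rates -> wheel speeds
    robot.set_wheel_speeds(left, right)

brain, robot = ToyBrain(), ToyRobot()
for _ in range(3):
    transfer_step(brain, robot)
```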
An efficient search algorithm is crucial in robotics, especially for exploration missions, where target availability is unknown and the condition of the environment is highly unpredictable. In a very large environment, it is not sufficient to scan an area or volume with a single robot; multiple robots should be involved to perform collective exploration. In this paper, we propose to combine a bio-inspired search algorithm called Lévy flight with the artificial potential field method to obtain an efficient search algorithm for multi-robot applications. The main focus of this work is not only to prove the concept and to measure the efficiency of the algorithm by experiments, but also to develop an appropriate generic framework that can be implemented both in simulation and on real robotic platforms. Several experiments comparing different search algorithms are also performed.
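A minimal sketch of the combined strategy follows, assuming heavy-tailed (Lévy-distributed) step lengths for exploration and a repulsive artificial potential field that keeps robots apart from each other and from obstacles; the exponent, gains, and influence range are illustrative values, not the parametrization used in the experiments.

```python
import numpy as np

def levy_step(alpha=1.5, rng=np.random.default_rng()):
    """Random step: heavy-tailed length (Pareto, exponent alpha) and uniform heading."""
    length = (1.0 - rng.random()) ** (-1.0 / alpha)      # inverse-CDF sampling
    theta = rng.uniform(0.0, 2.0 * np.pi)
    return length * np.array([np.cos(theta), np.sin(theta)])

def repulsive_force(pos, others, influence=2.0, gain=1.0):
    """Artificial potential field: push away from robots/obstacles within the influence range."""
    force = np.zeros(2)
    for other in others:
        diff = pos - other
        d = np.linalg.norm(diff)
        if 1e-9 < d < influence:
            force += gain * (1.0 / d - 1.0 / influence) * diff / d**3
    return force

def next_position(pos, others, max_step=1.0):
    """One iteration: Lévy exploration step plus repulsion, clipped to max_step."""
    step = levy_step() + repulsive_force(pos, others)
    norm = np.linalg.norm(step)
    if norm > max_step:
        step = step * (max_step / norm)
    return pos + step

# Example: three robots taking one coordinated exploration step.
positions = [np.array([0.0, 0.0]), np.array([1.0, 0.5]), np.array([-0.5, 1.0])]
positions = [next_position(p, [q for q in positions if q is not p]) for p in positions]
print(positions)
```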