Being able to explore an environment and understand the location and type of all objects therein is important for indoor robotic platforms that must interact closely with humans. However, it is difficult to evaluate progress in this area due to a lack of standardized testing, which is itself hampered by the need for active robot agency and perfect object ground truth. To help provide a standard for testing scene understanding systems, we present a new robot vision scene understanding challenge that uses simulation to enable repeatable experiments with active robot agency. We provide two challenging task types, three difficulty levels, five simulated environments, and a new measure for evaluating 3D cuboid object maps. Our aim is to drive state-of-the-art research in scene understanding by enabling evaluation and comparison of active robotic vision systems.
Abstract: This paper presents image-based navigation from an image memory using a combination of line segments and feature points. The environment is represented by a set of key images, acquired during a prior mapping phase, that defines the path to be followed during navigation. Key-image switching exploits the line segments and feature points shared between the current image and the nearby key images. Based on the key images and the current image, a control law is derived for computing the rotational velocity of a mobile robot during visual navigation. Using our approach, real-time navigation has been performed in a real indoor environment with a Pioneer 3-DX equipped with an on-board perspective camera and with the humanoid robot Pepper, without the need for accurate mapping and localization or for 3D reconstruction. We also show that combining points and lines increases the number of available features, which aids robust and successful navigation, especially in regions where few points or lines can be detected and tracked or matched.
Mobile phone induced electromagnetic field (MPEMF) exposure, as well as chanting of the Vedic mantra 'OM', has been shown to affect cognition and brain haemodynamics, but findings remain inconclusive. Twenty right-handed healthy teenagers (eight males and 12 females) aged 18.25 ± 0.44 years were randomly divided into four groups: (1) MPONOM (mobile phone 'ON' followed by 'OM' chanting); (2) MPOFOM (mobile phone 'OFF' followed by 'OM' chanting); (3) MPONSS (mobile phone 'ON' followed by 'SS' chanting); and (4) MPOFSS (mobile phone 'OFF' followed by 'SS' chanting). Brain haemodynamics during a Stroop task were recorded using a 64-channel fNIRS device at three time points: (1) baseline, (2) after 30 min of mobile phone ON/OFF exposure, and (3) after 5 min of OM/SS chanting. RM-ANOVA was applied for within- and between-group comparisons. Between-group analysis revealed that total scores on the incongruent Stroop task were significantly better after OM chanting than after SS chanting (MPOFOM vs MPOFSS), and prefrontal activation in channel 13 was significantly lower after OM than after SS chanting. There was no significant difference between the MPON and MPOF conditions in either Stroop performance or brain haemodynamics. These findings need confirmation through a larger future trial.