Humans have an innate ability to model, perceive, and plan in their environment while simultaneously performing tasks; replicating this ability remains a challenging problem in robotic cognition. We address this issue by proposing a neuro-inspired cognitive navigation framework composed of three major components: a semantic modeling framework (SMF), a semantic information processing (SIP) module, and a semantic autonomous navigation (SAN) module, which together enable the robot to perform cognitive tasks. The SMF creates an environment database using the Triplet Ontological Semantic Model (TOSM) and builds semantic models of the environment. Environment maps generated from these semantic models are stored in an on-demand database and downloaded by the SIP and SAN modules when the robot requires them. The SIP module contains active environment perception components for recognition and localization, and feeds relevant perception information to the behavior planner so that tasks are performed safely. The SAN module uses a behavior planner connected to a knowledge base and a behavior database, which it queries during action planning and execution. The main contributions of our work are the development of the TOSM, the integration of the SMF, SIP, and SAN modules into a single framework, and the interaction between these components based on findings from cognitive science. We deploy our cognitive navigation framework on a mobile robot platform, considering implicit and explicit constraints for autonomous navigation in a real-world environment. Robotic experiments demonstrate the validity of the proposed framework.
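The abstract gives no implementation detail; purely as an illustration, the sketch below shows one plausible way a TOSM-style entity and the on-demand environment database could be expressed in Python. All names (`TOSMEntity`, `OnDemandMapDB`) and fields are our assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' implementation): a TOSM-style entity as a
# triplet of symbolic, explicit (metric), and implicit (inferred) properties,
# stored in an on-demand map database queried by the SIP/SAN modules.
from dataclasses import dataclass, field

@dataclass
class TOSMEntity:
    name: str                                      # symbolic model: identifier
    explicit: dict = field(default_factory=dict)   # metric pose, geometry
    implicit: dict = field(default_factory=dict)   # inferred relations, affordances

class OnDemandMapDB:
    """Toy environment database; real systems would query a server on demand."""
    def __init__(self):
        self._entities = {}

    def add(self, entity: TOSMEntity):
        self._entities[entity.name] = entity

    def query(self, **implicit_filters):
        # Return entities whose implicit properties match all given filters.
        return [e for e in self._entities.values()
                if all(e.implicit.get(k) == v for k, v in implicit_filters.items())]

db = OnDemandMapDB()
db.add(TOSMEntity("door_12",
                  explicit={"pose": (3.2, 1.5, 0.0), "width_m": 0.9},
                  implicit={"connects": ("room_a", "corridor_1"), "openable": True}))
print(db.query(openable=True))  # -> [TOSMEntity(name='door_12', ...)]
```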
Knowledge representation in autonomous robots with social roles has steadily gained importance through their supportive task assistance in domestic, hospital, and industrial activities. For active assistance, these robots must process semantic knowledge to perform tasks more efficiently. In this context, ontology-based knowledge representation and reasoning (KR&R) techniques are a powerful tool, providing sophisticated domain knowledge for processing complex robotic tasks in real-world environments. In this article, we survey ontology-based semantic representation as integrated into current robotic knowledge base systems, with a three-fold aim: (i) to present recent developments in ontology-based knowledge representation systems that have led to effective solutions for real-world robotic applications; (ii) to review selected knowledge-based systems along seven dimensions: application, idea, development tools, architecture, ontology scope, reasoning scope, and limitations; and (iii) to pin down lessons learned from the review of existing knowledge-based systems for designing better solutions, and to delineate research limitations that might be addressed in future studies. The article concludes with a discussion of future research challenges that can serve as a guide for those interested in working on ontology-based semantic knowledge representation systems for autonomous robots.
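For readers unfamiliar with ontology-based KR&R, here is a minimal, self-contained sketch (our illustration, not drawn from any surveyed system) that encodes a fragment of robot domain knowledge as RDF triples with the rdflib library and answers a question with SPARQL.

```python
# Illustrative only: simple robot domain knowledge as RDF triples, queried
# with SPARQL via rdflib. All entity and property names are made up.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/robot#")
g = Graph()
g.add((EX.cup1, RDF.type, EX.Cup))              # cup1 is a Cup
g.add((EX.cup1, EX.locatedIn, EX.kitchen))      # cup1 is in the kitchen
g.add((EX.Cup, EX.graspableBy, EX.parallelGripper))  # class-level knowledge

# "Which objects in the kitchen are graspable by some gripper?"
q = """
PREFIX ex: <http://example.org/robot#>
SELECT ?obj WHERE {
    ?obj a ?cls ;
         ex:locatedIn ex:kitchen .
    ?cls ex:graspableBy ?gripper .
}
"""
for row in g.query(q):
    print(row.obj)   # -> http://example.org/robot#cup1
```

In a full KR&R system, a description-logic reasoner would first materialize inferred triples (e.g., via class subsumption) before such queries run.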
Deep-learning models on edge devices have received considerable attention as a promising means to handle a variety of AI applications. However, deploying deep learning models in production with efficient inference on edge devices remains challenging due to computation and memory constraints. This paper proposes a framework for a service robot named GuardBot, powered by a Jetson Xavier NX, and presents a real-world case study of deploying an optimized face mask recognition application with real-time inference on the edge device. The application enables the robot to detect whether people are wearing masks to guard against COVID-19 and to give a polite voice reminder to wear one. Our framework uses a dual-stage convolutional-neural-network architecture with three main modules: (1) MTCNN for face detection; (2) our proposed CNN model and seven transfer-learning-based custom models (Inception-v3, VGG16, DenseNet121, ResNet50, NASNetMobile, Xception, and MobileNet-v2) for face mask classification; and (3) TensorRT for optimizing all models to speed up inference on the Jetson Xavier NX. Our study carries out several analyses of model performance in terms of frames per second, execution time, and images per second. It also evaluates accuracy, precision, recall, and F1-score, and compares all models before and after optimization, with a focus on high throughput and low latency. Finally, the framework is deployed on a mobile robot and evaluated in both outdoor and multi-floor indoor environments, in patrolling and non-patrolling modes. Our proposed CNN model for face mask recognition obtains 94.5%, 95.9%, and 94.28% accuracy on the training, validation, and testing datasets respectively, outperforming MobileNet-v2, Xception, and Inception-v3, and achieves the highest throughput and lowest latency of all models after optimization at different precision levels.
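As a hedged sketch of the dual-stage architecture described above (not the authors' released code), the following Python uses the open-source mtcnn package for stage one and a hypothetical trained Keras classifier, `mask_classifier.h5`, for stage two; the 224x224 input size and 0.5 decision threshold are our assumptions.

```python
# Sketch of the dual-stage pipeline: MTCNN face detection followed by a
# binary mask/no-mask CNN classifier. Model file and input size are assumed.
import cv2
import numpy as np
from mtcnn import MTCNN
from tensorflow.keras.models import load_model

detector = MTCNN()                              # stage 1: face detection
classifier = load_model("mask_classifier.h5")   # stage 2: mask classification

def detect_masks(frame_bgr):
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    results = []
    for face in detector.detect_faces(rgb):
        x, y, w, h = face["box"]
        crop = cv2.resize(rgb[max(y, 0):y + h, max(x, 0):x + w], (224, 224))
        prob = classifier.predict(crop[np.newaxis] / 255.0, verbose=0)[0][0]
        results.append(((x, y, w, h), "mask" if prob > 0.5 else "no_mask"))
    return results
```

The TensorRT step is omitted here; in practice each model would be exported (e.g., to ONNX) and built into a TensorRT engine at FP32, FP16, or INT8 precision before deployment on the Xavier NX.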
Advances in robotics research have allowed robots to navigate diverse environments autonomously. However, conducting complex tasks while handling unpredictable circumstances is still challenging for robots: they must plan tasks by understanding the working environment beyond metric information, and they need countermeasures for various situations. In this paper, we propose a semantic navigation framework based on the Triplet Ontological Semantic Model (TOSM) to manage the various conditions affecting task execution. The framework allows robots with different kinematics to perform tasks in indoor and outdoor environments. We define TOSM-based semantic knowledge and generate a semantic map for each domain. The robots execute tasks according to their characteristics by converting inferred knowledge into the Planning Domain Definition Language (PDDL). Additionally, to make the framework sustainable, we define a policy for maintaining the map and re-planning in unexpected situations. Experiments with four different kinds of robots across four scenarios validate the scalability and reliability of the proposed framework.
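To make the PDDL conversion step concrete, here is a minimal sketch assuming a simple navigation domain; the domain name, predicates, and helper function are hypothetical, not the paper's actual output format.

```python
# Hypothetical sketch: turning inferred semantic-map knowledge into a PDDL
# problem string for the behavior planner. Domain/predicate names are assumed.
def knowledge_to_pddl(robot, start, goal, connections):
    places = sorted({start, goal} | {p for pair in connections for p in pair})
    facts = [f"(at {robot} {start})"]
    facts += [f"(connected {a} {b})" for a, b in connections]
    return "\n".join([
        "(define (problem navigate-task)",
        "  (:domain semantic-navigation)",
        f"  (:objects {robot} - robot {' '.join(places)} - place)",
        "  (:init " + " ".join(facts) + ")",
        f"  (:goal (at {robot} {goal})))",
    ])

# Knowledge inferred from the semantic map: two rooms joined by a corridor.
print(knowledge_to_pddl("robot1", "room_a", "lobby",
                        [("room_a", "corridor_1"), ("corridor_1", "lobby")]))
```

A standard PDDL planner could consume the resulting problem file together with a matching domain definition to produce the robot's action sequence.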
3D visual recognition is a prerequisite for most autonomous robotic systems operating in the real world. It empowers robots to perform a variety of tasks, such as tracking, understanding the environment, and human–robot interaction. Autonomous robots equipped with 3D recognition capabilities can better perform their social roles through supportive task assistance in professional jobs and effective domestic services. For active assistance, social robots must recognize their surroundings, including objects and places, to perform tasks more efficiently. This article first highlights the value-centric role of social robots in society by presenting recently developed robots and describing their main features. Motivated by the recognition capabilities of social robots, we then analyze data representation methods, organized by sensor modality, for 3D object and place recognition using deep learning models. In this direction, we delineate the research gaps that need to be addressed, summarize 3D recognition datasets, and present performance comparisons. A discussion of future research directions concludes the article. This survey is intended to show how recent developments in 3D visual recognition based on sensor modalities and deep-learning approaches can lay the groundwork for further research, and to serve as a guide for those interested in vision-based robotics applications.
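As one concrete example of the representation choices such surveys discuss, the snippet below (our illustration, not from the article) converts an unordered point cloud, a common LiDAR/depth-sensor output, into a voxel occupancy grid suitable as input for 3D CNNs.

```python
# Illustration only: converting an unordered point cloud (one common sensor
# representation) into a dense voxel occupancy grid (another), using NumPy.
import numpy as np

def voxelize(points, voxel_size=0.05, grid_dim=32):
    """points: (N, 3) array in metres -> (grid_dim,)*3 boolean occupancy grid."""
    origin = points.min(axis=0)                      # shift cloud to origin
    idx = ((points - origin) / voxel_size).astype(int)
    idx = np.clip(idx, 0, grid_dim - 1)              # drop out-of-range points
    grid = np.zeros((grid_dim,) * 3, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

cloud = np.random.rand(1000, 3)          # fake 1 m^3 scan
print(voxelize(cloud).sum(), "occupied voxels")
```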