With more than one billion connected devices, the Internet of Things (IoT) is gaining momentum. A mobile robot must be able to locate itself in space, an ability that is necessary for autonomous navigation. Every high-level navigation operation rests on the fundamental assumption that the robot knows both its own position and the locations of other points of interest in the world. A robot without a sense of position can act only in a localized, reactive manner and cannot plan actions beyond the immediate range of its sensors. The Internet of Robotic Things (IoRT) is a novel concept that combines the ubiquity of sensors and connected objects with robotic and autonomous systems. Robotics lies at the intersection of computer science and mechanical engineering: mechanical engineering informs the design and manufacture of the mechanical parts and components of robot control systems. Space robots are recognized as tools that can augment astronauts' manipulation, operation, and control capabilities, and can therefore serve as artificial assistants for in situ evaluation of conditions in space. Because gestures and actions are so common in robot control systems, they enable natural human-robot interaction. While AI and reinforcement learning have been used to regulate the operation of robots in a variety of sectors, IoRT, a novel subset of IoT, has the potential to track a range of robot action plans. In this research, we provide a conceptual framework, based on an IoRT control system enhanced with reinforcement learning and AI algorithms, to help future researchers design and simulate such a prototype. We also use adaptive Kalman filtering (AKF), combined with the A* algorithm, to track robots and reduce sensor noise. This conceptual framework still needs to be developed and simulated. Deep reinforcement learning (RL) is a promising approach for autonomously learning complex behaviors from limited sensor data. We also discuss the fundamental theoretical foundations of current algorithms and the issues that limit the use of reinforcement learning methods in practical robotics applications, and we outline possible future directions for reinforcement learning research.
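The abstract names AKF combined with A* for tracking and noise reduction but does not give the paper's actual formulation. As a rough illustrative sketch only, the fragment below pairs a 1-D innovation-adaptive Kalman filter (measurement noise R re-estimated online with exponential forgetting) with A* planning on a 4-connected occupancy grid; the function names, parameters (`q`, `r0`, `alpha`), and grid model are assumptions, not the method proposed in this work.

```python
import heapq

def adaptive_kalman_1d(measurements, q=1e-3, r0=1.0, alpha=0.3):
    """Scalar Kalman filter whose measurement-noise variance R is
    adapted online from the innovation sequence (illustrative sketch)."""
    x, p, r = measurements[0], 1.0, r0
    estimates = [x]
    for z in measurements[1:]:
        p += q                      # predict (constant-position model)
        nu = z - x                  # innovation
        # Innovation-based adaptation: R ~ E[nu^2] - P, floored to stay positive
        r = (1 - alpha) * r + alpha * max(nu * nu - p, 1e-6)
        k = p / (p + r)             # Kalman gain
        x += k * nu                 # state update
        p *= (1 - k)                # covariance update
        estimates.append(x)
    return estimates

def a_star(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle."""
    rows, cols = len(grid), len(grid[0])
    def h(n):  # Manhattan-distance heuristic (admissible on a 4-connected grid)
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])
    open_set = [(h(start), 0, start)]
    came, g_cost, closed = {start: None}, {start: 0}, set()
    while open_set:
        _, g, node = heapq.heappop(open_set)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:            # reconstruct the path back to start
            path = []
            while node is not None:
                path.append(node)
                node = came[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    came[(nr, nc)] = node
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # goal unreachable
```

In a setup like the one the abstract describes, the filtered position estimates would feed the planner (e.g. snapping the smoothed pose to a grid cell before calling `a_star`), rather than planning directly on raw noisy readings.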