Autonomous mobile robot applications require a robust navigation system that ensures the proper movement of the robot while it performs its tasks. The key challenge in such a navigation system is indoor localization. Simultaneous Localization and Mapping (SLAM) techniques combined with Adaptive Monte Carlo Localization (AMCL) are widely used to localize robots. However, this approach is susceptible to errors, especially in dynamic environments and in the presence of obstacles and objects. This paper presents an approach to improve the indoor pose estimation of a wheeled mobile robot. To this end, the proposed localization system integrates the AMCL algorithm with position updates and corrections based on the artificial vision detection of fiducial markers scattered throughout the environment, reducing the error accumulated by the AMCL position estimate. The proposed approach is based on the Robot Operating System (ROS) and is tested and validated in a simulation environment. As a result, an improvement was identified in the trajectory performed by the robot when the SLAM system is combined with traditional AMCL corrected by the artificial vision detection of fiducial markers.
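For concreteness, the following is a minimal ROS (Python/rospy) sketch of the correction idea summarized above, not the paper's actual implementation. It assumes a hypothetical /fiducial_pose topic that already provides the robot's map-frame pose recovered from a detected marker whose location is known, and it re-seeds AMCL through its standard /initialpose topic whenever the marker-based estimate and the AMCL estimate diverge beyond a threshold.

```python
#!/usr/bin/env python
# Minimal sketch: correct AMCL's pose estimate with fiducial-marker observations.
# Assumes a hypothetical /fiducial_pose topic carrying the robot pose in the map
# frame, recovered externally from a detected marker with a known map location.
import math

import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

REINIT_THRESHOLD = 0.30  # metres of disagreement before re-seeding AMCL (assumed value)


class FiducialCorrector(object):
    def __init__(self):
        self.amcl_pose = None
        # AMCL listens on /initialpose; publishing here re-seeds its particle filter.
        self.init_pub = rospy.Publisher('/initialpose',
                                        PoseWithCovarianceStamped, queue_size=1)
        rospy.Subscriber('/amcl_pose', PoseWithCovarianceStamped, self.amcl_cb)
        rospy.Subscriber('/fiducial_pose', PoseWithCovarianceStamped, self.marker_cb)

    def amcl_cb(self, msg):
        self.amcl_pose = msg

    def marker_cb(self, msg):
        # Compare the marker-derived pose with AMCL's current estimate.
        if self.amcl_pose is None:
            return
        a = self.amcl_pose.pose.pose.position
        m = msg.pose.pose.position
        error = math.hypot(a.x - m.x, a.y - m.y)
        if error > REINIT_THRESHOLD:
            # The marker observation is trusted: re-initialize AMCL around it.
            self.init_pub.publish(msg)


if __name__ == '__main__':
    rospy.init_node('fiducial_corrector')
    FiducialCorrector()
    rospy.spin()
```

In this sketch the correction is applied by re-initializing the particle filter rather than by fusing the two estimates; either strategy fits the general scheme described in the abstract.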
I. INTRODUCTION

Mobile robots are widespread in several areas, such as industrial automation, agriculture, medical care, autonomous driving, product deliveries, planetary exploration, smart warehouses, personal services, construction, reconnaissance, entertainment, emergency rescue operations, patrolling, and transportation [1]. Being one of the fastest-growing scientific fields today, mobile robotics has a considerable impact not only on research but also on the economy. Even in the COVID-19 pandemic scenario, the robotics market is expected to reach around US$23 billion in 2021 and to grow to US$54 billion in 2023 [2], pointing to a considerable expansion of autonomous mobile robot applications.

Characterized as intelligent systems, mobile robots have the ability to move autonomously, without human interference [3], making decisions based on the information collected from their sensors (e.g., LiDAR, sonar, and cameras), which allows them to help humans with heavy or time-consuming tasks [4]. However, for a mobile robot to be able