We introduce an approach to autonomous navigation for motorized wheelchairs in shared indoor spaces with human presence, combining Deep Reinforcement Learning and Computer Vision. The central aim of this study is to enhance the well-being of individuals with disabilities who rely on such assistance for their mobility needs. Our methodology combines the Deep Deterministic Policy Gradient (DDPG) algorithm with computer vision techniques, enabling motorized wheelchairs to navigate environments containing both stationary and moving people. Comparative tests were conducted between the DDPG and Deep Q-Network (DQN) algorithms across four distinct stages. Each stage comprised two scenarios that differed from the training environment, with complexity levels exceeding those on which the robot had been trained.
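A distinguishing component of DDPG (relative to DQN) is its use of slowly tracking target networks updated by Polyak averaging, which stabilizes training with continuous actions. The sketch below illustrates only this soft-update step on toy weight dictionaries; the parameter \texttt{tau} and the dictionary-of-arrays representation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def soft_update(target, online, tau=0.005):
    """Polyak-average online weights into the target network (DDPG-style):
    theta_target <- (1 - tau) * theta_target + tau * theta_online."""
    return {k: (1.0 - tau) * target[k] + tau * online[k] for k in target}

# Toy weights for illustration: the target slowly tracks the online network.
online = {"w": np.ones(3)}
target = {"w": np.zeros(3)}
for _ in range(100):
    target = soft_update(target, online)
# After 100 updates, target["w"] equals 1 - (1 - tau)**100 elementwise.
```

Because \texttt{tau} is small, the target network changes slowly, which damps the feedback loop between the critic's bootstrapped targets and its own parameters.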
The DDPG algorithm proved more efficient and stable than DQN. Across all analyzed stages, DDPG achieved higher average success rates: 98\% (Stage 01), 89\% (Stage 02), 86\% (Stage 03), and 86\% (Stage 04), demonstrating strong generalization to settings more complex than the training environment. In contrast, DQN struggled to avoid collisions, yielding significantly lower average success rates: 3\% (Stage 02), 14\% (Stage 03), and 29\% (Stage 04). These findings underscore the potential of the proposed solution and contribute to the progress of research in this domain.