The ingress task, in which a humanoid robot enters a land vehicle, is a crucial capability for robots that must drive to reach a destination quickly. Prior work is inefficient at enabling robots to enter a vehicle from a random starting position and orientation, or to withstand vehicle elasticity; both are hard to model. Deep Reinforcement Learning (DRL) can be introduced to address these issues. However, previous applications of DRL to humanoid control tend to use the same reward terms throughout the entire control process, which is ill-suited to the ingress task with its many distinct states.
This letter proposes a novel Finite State Machine (FSM) control method integrated with Deep Reinforcement Learning for the humanoid ingress task. The method collects the robot's status at the end of each state and immediately adjusts its next move accordingly. In simulation, it achieves a 97% ingress success rate under random initial displacement and vehicle elasticity.
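The control scheme described above can be illustrated with a minimal sketch: a finite state machine where each state runs its own (here stubbed) policy, and the status collected at each state boundary decides the next state. All state names, policies, and transition checks below are hypothetical placeholders, not the authors' implementation.

```python
from typing import Callable, Dict

# Hypothetical per-state policies: each maps an observed status to an action.
def approach_policy(status: dict) -> str:
    return "step_toward_vehicle"

def grab_policy(status: dict) -> str:
    return "grip_handle"

def sit_policy(status: dict) -> str:
    return "lower_into_seat"

POLICIES: Dict[str, Callable[[dict], str]] = {
    "approach": approach_policy,
    "grab": grab_policy,
    "sit": sit_policy,
}

def next_state(state: str, status: dict) -> str:
    # Transition chosen from the status collected at the end of each state;
    # the condition keys here are illustrative.
    if state == "approach":
        return "grab" if status.get("near_door", False) else "approach"
    if state == "grab":
        return "sit" if status.get("handle_held", False) else "grab"
    return "done"

def run_ingress(statuses: list) -> list:
    """Step through states, applying the state-specific policy each time."""
    state, trace = "approach", []
    for status in statuses:
        if state == "done":
            break
        trace.append((state, POLICIES[state](status)))
        state = next_state(state, status)
    return trace
```

For example, feeding a sequence of status observations such as `run_ingress([{"near_door": False}, {"near_door": True}, {"handle_held": True}, {}])` steps the machine from `approach` through `grab` and `sit` to completion, with the policy switched at each state boundary.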