Classical autonomous navigation systems can control robots in a collision-free manner, oftentimes with verifiable safety and explainability. When facing new environments, however, fine-tuning of the system parameters by an expert is typically required before the system can navigate as expected. To alleviate this requirement, the recently proposed Adaptive Planner Parameter Learning paradigm allows robots to learn how to dynamically adjust planner parameters using a teleoperated demonstration or corrective interventions from non-expert users. However, these interaction modalities require users to take full control of the moving robot, which demands that the users be familiar with robot teleoperation. As an alternative, we introduce APPLE, Adaptive Planner Parameter Learning from Evaluative Feedback (real-time, scalar-valued assessments of behavior), which represents a less-demanding modality of interaction. Simulated and physical experiments show that APPLE can achieve better performance than the planner with static default parameters and can even improve upon parameters learned from richer interaction modalities.
I. INTRODUCTION

Mobile robot navigation is a well-studied problem in the robotics community. Many classical approaches have been developed over the last several decades, and several of them have been robustly deployed on physical robot platforms moving in the real world [1], [2], with verifiable guarantees of safety and explainability. However, prior to deployment in a new environment, these approaches typically require parameter re-tuning in order to achieve robust navigation performance. For example, in cluttered environments, a low velocity and high sampling rate are necessary in order for the system to be able to generate safe and smooth motions, whereas in relatively open spaces, a higher velocity and lower sampling rate are sufficient.
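To make the idea of tuning a planner parameter from scalar evaluative feedback concrete, the following is a minimal sketch, not the method proposed in this paper: a simple hill-climbing loop that perturbs a single hypothetical parameter (a maximum velocity) and keeps a perturbation when the scalar feedback improves. The function `evaluative_feedback` is a stand-in for a human rating; its form and the parameter names are illustrative assumptions, not part of APPLE.

```python
import random

def evaluative_feedback(max_vel, ideal=0.8):
    """Stand-in for a human's scalar rating: higher when max_vel is
    closer to an environment-appropriate value unknown to the learner.
    (Illustrative assumption, not the paper's feedback model.)"""
    return -abs(max_vel - ideal)

def adapt_parameter(initial=2.0, step=0.05, iters=200, seed=0):
    """Hill-climb on one planner parameter: propose a small random
    perturbation, keep it only if the feedback improves."""
    rng = random.Random(seed)
    vel = initial
    best = evaluative_feedback(vel)
    for _ in range(iters):
        candidate = vel + rng.uniform(-step, step)
        fb = evaluative_feedback(candidate)
        if fb > best:
            vel, best = candidate, fb
    return vel

if __name__ == "__main__":
    print(adapt_parameter())
```

The point of the sketch is only that scalar feedback, unlike a demonstration or a corrective intervention, never requires the user to control the robot; the learner explores and the user merely rates.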