Mobile robots are increasingly employed to perform complex tasks in dynamic environments. Reinforcement learning (RL) methods are recognized as a promising way to specify such tasks in a relatively simple manner. However, the strong dependency between the learning method and the task to be learned is a well-known problem that restricts practical implementations of RL in robotics, often requiring major parameter modifications and additional techniques for each particular task. In this paper we present a practical core implementation of RL that enables learning multiple robotic tasks with minimal or no per-task tuning. Based on value iteration methods, this implementation includes a novel approach to action selection, called Q-biased softmax regression (QBIASSR), which avoids poor performance of the learning process when the robot reaches new, unexplored states. Our approach exploits the structure of the state space by attending to the physical variables involved (e.g., distances to obstacles, the X, Y, θ pose, etc.), so that sets of already-experienced states can inform decision-making in unexplored or rarely explored states. This improvement plays a relevant role in reducing the tuning of the algorithm for particular tasks. Experiments with real and simulated robots, performed with the software framework also introduced here, show that our implementation effectively learns different robotic tasks without tuning the learning method. Results also suggest that the combination of true online SARSA(λ) (TOSL) with QBIASSR can outperform existing RL core algorithms in low-dimensional robotic tasks.
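To illustrate the action-selection idea, the following is a minimal, self-contained Python sketch of Q-biased softmax selection over a tabular Q-function. The way "related" states are grouped here (via shared discrete variable values), as well as the names related_states and qbias_softmax_action, are assumptions introduced for clarity; this is a sketch of the general technique, not the authors' reference implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    n_states, n_actions = 100, 4
    Q = np.zeros((n_states, n_actions))

    def related_states(s, n_vars=2, var_dim=10):
        """Hypothetical helper: treat the state index as a tuple of discrete
        physical variables and return all states sharing at least one
        variable value with s (a stand-in for the paper's exploitation of
        state-space structure)."""
        vals = [(s // var_dim**i) % var_dim for i in range(n_vars)]
        return [t for t in range(n_states)
                if any((t // var_dim**i) % var_dim == v
                       for i, v in enumerate(vals))]

    def qbias_softmax_action(s, temperature=1.0):
        """Bias the softmax over Q[s] with the mean Q-values of related,
        already-experienced states, so an unexplored state inherits a
        sensible action preference instead of a uniform one."""
        bias = Q[related_states(s)].mean(axis=0)
        prefs = (Q[s] + bias) / temperature
        prefs -= prefs.max()                      # numerical stability
        probs = np.exp(prefs) / np.exp(prefs).sum()
        return rng.choice(n_actions, p=probs)

    a = qbias_softmax_action(s=42)

In a full agent, this selection rule would be combined with a value-update method such as true online SARSA(λ); the bias term shrinks in influence as Q[s] itself is updated from experience.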
Autonomy in robotics can be seen as a continuum along which different levels of collaborative control between humans and robots can be selected, ranging from tele-operation (full control by the human) to full autonomy. In many practical applications, the main issues are which intermediate levels of autonomy are available and how to change from one to another. In this paper we are interested in providing smoothly adjustable autonomy to remotely controlled mobile robots, that is, an automatic mechanism for selecting the degree of collaborative control that remains mostly unnoticed by the human in normal situations, providing him/her with the maximum possible sense of control at every moment. Adjustable autonomy has been reported previously, but not with these simultaneous goals: i) to control a mobile robot remotely at the servo-process level (i.e., through direct control commands), ii) to be independent of any particular navigation algorithm, and iii) to change the autonomy level smoothly. Our solution is based on a motion demultiplexer that predicts, using the robot kinematics, the target locations where the user intends to place the mobile platform in the future. Our experiments show that all the requirements of our approach can be satisfied with minimal modifications to existing robotic control software.
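To make the prediction step concrete, below is a minimal Python sketch of how a motion demultiplexer could roll a tele-operation command forward through unicycle kinematics to estimate the pose the user is steering toward. The constant-command assumption, the prediction horizon, and the name predict_target are illustrative choices, not necessarily the predictor used in the paper.

    import math

    def predict_target(x, y, theta, v, w, horizon=2.0, dt=0.05):
        """Integrate a constant (v, w) velocity command over `horizon`
        seconds with the unicycle model to estimate the pose the current
        command is steering toward."""
        t = 0.0
        while t < horizon:
            x += v * math.cos(theta) * dt
            y += v * math.sin(theta) * dt
            theta += w * dt
            t += dt
        return x, y, theta

    # Example: a forward-and-left command yields a target pose that an
    # autonomous navigator could adopt as its goal.
    print(predict_target(0.0, 0.0, 0.0, v=0.4, w=0.3))

A predicted target of this kind lets the system hand the low-level servo commands to an autonomous navigator without the user noticing, since the navigator pursues the same location the user's commands were already heading toward.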