Abstract-This paper proposes a simulation-based active policy learning algorithm for finite-horizon, partially-observed sequential decision processes. The algorithm is tested in the domain of robot navigation and exploration under uncertainty, where the expected cost is a function of the belief state (filtering distribution). This filtering distribution is in turn nonlinear and subject to discontinuities, which arise because of constraints in the robot motion and control models. As a result, the expected cost is non-differentiable and very expensive to simulate. The new algorithm overcomes the first difficulty and reduces the number of simulations as follows. First, it assumes that we have carried out previous evaluations of the expected cost at several policy parameter settings. Second, it fits a Gaussian process (GP) regression model to these values, so as to approximate the expected cost as a function of the policy parameters. Third, it uses the GP predicted mean and variance to construct a statistical measure that determines which policy parameters should be used in the next simulation. The process is iterated using the new parameters and the newly gathered expected cost observation. Since the objective is to find the policy parameters that minimize the expected cost, this active learning approach effectively trades off exploration (where the GP variance is large) against exploitation (where the GP mean is low). In our experiments, a robot uses the proposed method to plan an optimal path for accomplishing a set of tasks, while maximizing the information about its pose and map estimates. These estimates are obtained with a standard filter for SLAM. Upon gathering new observations, the robot updates the state estimates and is able to replan a new path in the spirit of open-loop feedback control.
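The three-step loop described in the abstract (fit a GP to past cost evaluations, build an infill measure from the GP mean and variance, simulate at the selected parameters, and repeat) can be illustrated with a minimal sketch. The sketch below is not the paper's implementation: it assumes a one-dimensional policy parameter, a synthetic placeholder `simulate_expected_cost` in place of the expensive SLAM-based simulation, and a lower-confidence-bound acquisition rule as one possible instance of the statistical measure; the paper's actual infill criterion and simulator differ.

```python
# Minimal sketch of GP-based active policy learning (hypothetical names and toy cost).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def simulate_expected_cost(theta):
    """Placeholder for the expensive Monte Carlo simulation of the expected cost."""
    return np.sin(3.0 * theta) + 0.5 * theta**2 + 0.1 * rng.standard_normal()

# Step 1: previous evaluations of the expected cost at a few policy parameters.
thetas = np.array([[-2.0], [-0.5], [1.0], [2.5]])
costs = np.array([simulate_expected_cost(t[0]) for t in thetas])

candidates = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)

for it in range(10):
    # Step 2: fit a GP regression model to (policy parameters, expected cost) pairs.
    # alpha accounts for noise in the simulated cost observations.
    gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-2, normalize_y=True)
    gp.fit(thetas, costs)

    # Step 3: combine the GP predicted mean and standard deviation into an
    # infill measure; a lower confidence bound trades off exploitation
    # (low predicted cost) against exploration (high predictive variance).
    mean, std = gp.predict(candidates, return_std=True)
    kappa = 2.0
    acquisition = mean - kappa * std
    theta_next = candidates[np.argmin(acquisition)]

    # Run one more (expensive) simulation at the chosen parameters and iterate.
    cost_next = simulate_expected_cost(theta_next[0])
    thetas = np.vstack([thetas, theta_next])
    costs = np.append(costs, cost_next)

best = thetas[np.argmin(costs)]
print("estimated optimal policy parameter:", best)
```

In the open-loop feedback control setting of the experiments, such a loop would be rerun each time the robot gathers new observations and updates its pose and map estimates, yielding a replanned path under the refreshed belief state.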