This paper addresses the problem of planning optimal maneuver trajectories and guiding a mobile robot toward target positions in uncertain environments for exploration purposes. A hierarchical deep learning-based control framework is proposed, consisting of an upper-level motion planning layer and a lower-level waypoint tracking layer. In the motion planning phase, a recurrent deep neural network (RDNN)-based algorithm is adopted to predict the optimal maneuver profiles for the mobile robot. This approach builds on the recently proposed idea of using deep neural networks (DNNs) to approximate optimal motion trajectories, which has been shown to achieve fast approximation performance. To further enhance the network prediction performance, a recurrent network model capable of fully exploiting the inherent relationship between pre-optimized state and control pairs is advocated. At the lower level, a deep reinforcement learning (DRL)-based collision-free control algorithm is established to accomplish the waypoint tracking task in an uncertain environment (e.g., in the presence of unexpected obstacles). Since this approach allows the control policy to learn directly from human demonstration data, the time required for training can be significantly reduced. Moreover, a noisy prioritized experience replay (PER) algorithm is proposed to improve the exploration rate of the control policy. The effectiveness of the proposed deep learning-based control framework is validated through a number of simulation and experimental case studies. The simulation results show that the proposed DRL method outperforms the vanilla PER algorithm in terms of training speed. Experimental videos are also provided, and the corresponding results confirm that the proposed strategy fulfills the autonomous exploration mission with improved motion planning performance, enhanced collision avoidance ability, and reduced training time.
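The abstract does not detail the mechanics of the noisy PER scheme. The Python sketch below illustrates one plausible reading, in which standard TD-error-based priorities are perturbed with multiplicative noise at sampling time so that the policy revisits a broader set of transitions; the class name, hyperparameters, and noise model are illustrative assumptions, not the paper's exact algorithm.

```python
import random
import numpy as np

class NoisyPrioritizedReplayBuffer:
    """Illustrative PER buffer with noise-perturbed sampling priorities.

    Hypothetical sketch: priorities follow |TD error|^alpha as in standard
    PER, and log-normal noise is applied at sampling time so low-priority
    transitions are still revisited occasionally (improving exploration).
    """

    def __init__(self, capacity=100_000, alpha=0.6, noise_std=0.1):
        self.capacity = capacity
        self.alpha = alpha          # how strongly priorities bias sampling
        self.noise_std = noise_std  # scale of the exploration noise (assumed)
        self.buffer = []
        self.priorities = []
        self.pos = 0

    def add(self, transition, td_error=1.0):
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(priority)
        else:
            # Overwrite oldest entries once the buffer is full.
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = priority
            self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        prios = np.asarray(self.priorities)
        # Multiplicative noise flattens the priority distribution slightly,
        # trading pure prioritized sampling against near-uniform replay.
        noisy = prios * np.exp(self.noise_std * np.random.randn(len(prios)))
        probs = noisy / noisy.sum()
        idx = np.random.choice(len(self.buffer), size=batch_size, p=probs)
        return [self.buffer[i] for i in idx], idx

    def update_priorities(self, indices, td_errors):
        for i, err in zip(indices, td_errors):
            self.priorities[i] = (abs(err) + 1e-6) ** self.alpha

if __name__ == "__main__":
    buf = NoisyPrioritizedReplayBuffer(capacity=1000, noise_std=0.2)
    for _ in range(500):
        buf.add(("state", "action", 0.0, "next_state"),
                td_error=random.random())
    batch, idx = buf.sample(32)
```

Under this reading, noise_std = 0 recovers vanilla PER, while larger values push sampling toward uniform replay, which is consistent with the stated goal of improving the exploration rate of the control policy.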