This paper addresses the challenge of docking an Autonomous Underwater Vehicle (AUV) under realistic conditions. Traditional model-based controllers are often constrained by the complexity and variability of the ocean environment. To overcome these limitations, we propose a Deep Reinforcement Learning (DRL) approach to the homing and docking maneuver. First, we define the docking task in terms of its observations, actions, and reward function, aiming to bridge the gap between theoretical DRL research and docking algorithms tested on real vehicles. We then introduce a novel observation space that combines raw noisy measurements with filtered estimates obtained from an Extended Kalman Filter (EKF). Simulations with several DRL algorithms demonstrate the effectiveness of this approach: the proposed observations yield stable policies in fewer learning steps, outperforming not only traditional control methods but also policies trained with the same DRL algorithms in noise-free environments.
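
As a minimal illustrative sketch of the observation-space idea (not the authors' implementation), the agent's input could be formed by concatenating the raw noisy sensor readings with the corresponding EKF state estimates. All variable names, dimensions, and example values below are assumptions introduced purely for illustration.

```python
import numpy as np

def build_observation(raw_measurement: np.ndarray,
                      ekf_estimate: np.ndarray) -> np.ndarray:
    """Concatenate raw noisy measurements with EKF-filtered estimates.

    This mirrors the general idea of an observation space that exposes both
    unfiltered and filtered information to the policy; the exact contents of
    the paper's observation vector are not specified here and are assumed.
    """
    return np.concatenate([raw_measurement, ekf_estimate])

# Hypothetical example: noisy relative measurements to the docking station
# alongside a filtered pose/velocity estimate from the EKF.
raw = np.array([12.3, 0.15, -0.8])       # e.g. range [m], bearing [rad], depth error [m]
ekf = np.array([11.9, 0.12, -0.7, 0.5])  # e.g. filtered range, bearing, depth error, surge speed
obs = build_observation(raw, ekf)        # observation vector fed to the DRL policy
```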