Utilizing the experience tuples collected in the replay buffer (RB) is the primary means of exploiting past experience in off-policy reinforcement learning (RL) algorithms, and therefore the scheme used to sample experience tuples from the RB can be critical for experience utilization. In this paper, we find that a widely used sampling scheme in off-policy RL suffers from inefficiency due to uneven sampling of experience tuples from the RB. In fact, the conventional uniform sampling of experience tuples in the RB causes severely unbalanced experience utilization, since experiences stored earlier in the RB are sampled with much higher frequency, especially in the early stage of learning. We mitigate this fundamental problem by employing a half-normal sampling probability window that allocates higher sampling probability to newer experiences in the RB. In addition, we propose general and local size adjustment schemes that determine the standard deviation of the half-normal sampling window, which enhance learning speed and performance and mitigate temporary performance degradation during training, respectively. To demonstrate its effectiveness, we apply the proposed sampling technique to state-of-the-art off-policy RL algorithms and evaluate it on various RL benchmark tasks, including the MuJoCo Gym environments and the CARLA simulator. The proposed technique yields considerable improvements in learning speed and final performance, especially on tasks with large state and action spaces. Furthermore, the proposed sampling technique increases the stability of the considered RL algorithms, as verified by lower variance of the performance results across different random seeds for network initialization.
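
As a rough illustration of the sampling idea summarized above, the sketch below shows a replay buffer that draws newer experiences with higher probability by weighting each stored tuple with a half-normal function of its age. The class name, the default choice of standard deviation, and the exact weighting formula are assumptions made for illustration only; the paper's general and local size adjustment schemes for the standard deviation are not reproduced here.

```python
import numpy as np


class HalfNormalReplayBuffer:
    """Minimal sketch of a replay buffer that samples newer experiences
    with higher probability via half-normal weights over experience age.
    (Hypothetical illustration, not the paper's exact formulation.)"""

    def __init__(self, capacity, sigma=None):
        self.capacity = capacity
        self.storage = []      # experience tuples, e.g. (s, a, r, s_next, done)
        self.insert_pos = 0    # circular write index
        self.sigma = sigma     # std. dev. of the half-normal window

    def add(self, experience):
        # Overwrite oldest experience once the buffer is full.
        if len(self.storage) < self.capacity:
            self.storage.append(experience)
        else:
            self.storage[self.insert_pos] = experience
        self.insert_pos = (self.insert_pos + 1) % self.capacity

    def sample(self, batch_size):
        n = len(self.storage)
        # Age of each stored slot: 0 = newest, n-1 = oldest.
        ages = (self.insert_pos - 1 - np.arange(n)) % n
        # Assumed default: window std. dev. proportional to buffer occupancy.
        sigma = self.sigma if self.sigma is not None else n / 2
        # Half-normal weights: smaller age (newer experience) -> larger weight.
        weights = np.exp(-0.5 * (ages / sigma) ** 2)
        probs = weights / weights.sum()
        idx = np.random.choice(n, size=batch_size, p=probs)
        return [self.storage[i] for i in idx]


# Example usage during training (hypothetical):
# buf = HalfNormalReplayBuffer(capacity=100_000, sigma=30_000)
# buf.add((state, action, reward, next_state, done))
# batch = buf.sample(256)
```

In this sketch, a smaller standard deviation concentrates sampling on the most recent experiences, while a larger one approaches uniform sampling over the buffer; the paper's adjustment schemes can be viewed as tuning this trade-off during training.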