With the development of wireless networks, a growing number of mobile applications are emerging and gaining popularity. These applications span a wide range of domains, such as traffic monitoring, smart homes, real-time vision processing, and object tracking, and typically require intensive computation to achieve a high quality of experience. Although the performance of mobile devices (MDs) has been continuously enhanced, running all of these applications on a single MD still incurs high energy consumption and latency. Fortunately, mobile edge computing (MEC) allows MDs to offload their computation-intensive tasks to proximal eNodeBs (eNBs) to augment their computational capabilities. However, current task offloading schemes mainly concentrate on average-based performance metrics and fail to meet the deadline constraints of tasks. Based on the deep reinforcement learning (DRL) approach, this paper proposes an Energy-aware Task Offloading with Deadline constraint (DRL-E2D) algorithm for a multi-eNB MEC environment, which aims to maximize the reward under the deadline constraints of tasks. Within the actor-critic framework, we integrate action representation into DRL-E2D to handle the large discrete action space, i.e., we use the low-complexity k-nearest-neighbor method as an approximate approach to extract the optimal discrete action from the continuous action space. Extensive experimental results show that DRL-E2D achieves better performance than the comparison algorithms under all parameter settings, indicating that DRL-E2D is robust to state changes in the MEC environment.
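To make the k-nearest-neighbor action extraction concrete, the following is a minimal sketch, assuming each discrete offloading action (e.g., an eNB assignment) has an embedding vector, the actor outputs a continuous "proto-action", and the critic scores candidate actions; the names `knn_actions`, `select_action`, and the placeholder critic are illustrative and not taken from the paper.

```python
import numpy as np

def knn_actions(proto_action, action_embeddings, k):
    """Return indices of the k discrete actions closest to the continuous proto-action."""
    dists = np.linalg.norm(action_embeddings - proto_action, axis=1)
    return np.argpartition(dists, k)[:k]

def select_action(state, proto_action, action_embeddings, critic_q, k=5):
    """Refine the k nearest candidates with the critic and keep the highest-valued one."""
    candidates = knn_actions(proto_action, action_embeddings, k)
    q_values = [critic_q(state, action_embeddings[i]) for i in candidates]
    return candidates[int(np.argmax(q_values))]

# Toy usage: 1000 discrete actions embedded in 8-D; the critic is a stand-in function.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 8))
state = rng.normal(size=16)
proto = rng.normal(size=8)                 # in practice, output of the actor network
fake_critic = lambda s, a: float(a.sum())  # placeholder for the learned Q-network
best = select_action(state, proto, embeddings, fake_critic, k=5)
print("chosen discrete action index:", best)
```

This restricts the expensive critic evaluation to only k candidates rather than the full discrete action space, which is what keeps the approach low-complexity when the number of offloading decisions grows with the number of eNBs.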