With the explosion of delay-sensitive and computation-intensive vehicular applications, traditional cloud computing faces enormous challenges. Vehicular edge computing, as an emerging computing paradigm, provides powerful support for vehicular networks. However, vehicle mobility and the time-varying characteristics of communication channels complicate the design and implementation of vehicular network systems, leading to increased delay and energy consumption. To address this problem, this article proposes a hybrid task offloading algorithm that combines deep reinforcement learning with convex optimization. The model accounts for vehicle mobility and the signal-blocking problems common in vehicular edge computing environments. To minimize system overhead, the twin delayed deep deterministic policy gradient (TD3) algorithm first makes the offloading decisions, taking a normalized state space as input to improve convergence efficiency; the Lagrange multiplier method then allocates server bandwidth among multiple users. Simulation results demonstrate that the proposed algorithm outperforms the compared solutions in terms of delay and energy consumption.
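The normalized state space mentioned above can be illustrated with a minimal sketch. The feature names and value ranges below (task size, channel gain, vehicle position, server load) are hypothetical placeholders rather than the paper's actual state definition; the point is the min-max scaling that maps each raw feature into [0, 1] before it is fed to the TD3 actor.

```python
import numpy as np

# Hypothetical per-feature bounds; the paper's actual state definition may differ.
STATE_BOUNDS = {
    "task_size_bits":   (1e5, 1e7),    # size of the task to offload
    "channel_gain":     (1e-8, 1e-4),  # time-varying channel gain
    "vehicle_position": (0.0, 1000.0), # position along the road segment (m)
    "server_load":      (0.0, 1.0),    # current edge-server utilization
}

def normalize_state(raw_state: dict) -> np.ndarray:
    """Min-max scale each feature to [0, 1] so the TD3 actor sees inputs
    on a common scale, which typically improves convergence."""
    features = []
    for name, (lo, hi) in STATE_BOUNDS.items():
        x = np.clip(raw_state[name], lo, hi)
        features.append((x - lo) / (hi - lo))
    return np.asarray(features, dtype=np.float32)

# Example: one observation drawn from the (hypothetical) environment.
obs = {"task_size_bits": 4.2e6, "channel_gain": 3e-6,
       "vehicle_position": 512.0, "server_load": 0.37}
print(normalize_state(obs))  # all entries lie in [0, 1]
```

Since channel gain can span several orders of magnitude, one reasonable variant is to apply the min-max scaling to its logarithm instead of the raw value; either way, the actor network receives inputs on a comparable scale.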
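The bandwidth-allocation step admits a closed form under a standard formulation. Assuming, as an illustration rather than the paper's stated model, that user $i$ must upload $d_i$ bits at spectral efficiency $e_i$, so its transmission delay is $d_i/(e_i b_i)$, minimizing the total delay subject to $\sum_i b_i = B$ via a Lagrange multiplier gives $b_i \propto \sqrt{d_i/e_i}$:

```python
import numpy as np

def allocate_bandwidth(d, e, B):
    """Closed-form Lagrangian allocation for
        min sum_i d_i/(e_i*b_i)   s.t.  sum_i b_i = B.
    Stationarity of L = sum_i d_i/(e_i*b_i) + lam*(sum_i b_i - B) gives
        -d_i/(e_i*b_i**2) + lam = 0  =>  b_i = sqrt(d_i/(e_i*lam)),
    and the constraint fixes lam, so
        b_i = B * sqrt(d_i/e_i) / sum_j sqrt(d_j/e_j)."""
    w = np.sqrt(np.asarray(d, dtype=float) / np.asarray(e, dtype=float))
    return B * w / w.sum()

# Example with three users (hypothetical numbers):
d = [2e6, 5e6, 1e6]                # task sizes in bits
e = [4.0, 2.5, 6.0]                # spectral efficiencies in bit/s/Hz
b = allocate_bandwidth(d, e, 20e6) # 20 MHz total server bandwidth
print(b, b.sum())                  # allocations sum to the 20 MHz budget
print(np.asarray(d) / (np.asarray(e) * b))  # per-user upload delays
```

Because the objective is convex in each $b_i > 0$, this stationary point is the global minimum; if the paper's objective instead weights delay against energy, the same multiplier method applies with the corresponding objective in place of the pure-delay one assumed here.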