The advancement of the Internet of Autonomous Vehicles has facilitated the development and deployment of numerous onboard applications. However, the delay-sensitive tasks generated by these applications pose significant challenges for vehicles with limited onboard computing resources. Moreover, these tasks are often interdependent, which prevents parallel computation, severely prolongs completion times, and results in substantial energy consumption. Task offloading offers an effective way to mitigate these challenges, yet traditional offloading strategies fall short in the highly dynamic environment of the Internet of Vehicles. This paper proposes a task-offloading scheme based on deep reinforcement learning to optimize offloading decisions between vehicles and edge computing resources. The task-offloading problem is modeled as a Markov decision process, and an improved twin delayed deep deterministic policy gradient (TD3) algorithm, LT-TD3, is introduced to enhance decision making. Integrating a long short-term memory (LSTM) network and a self-attention mechanism into the LT-TD3 architecture strengthens its feature-extraction and representation capability. Additionally, to account for task dependencies, a topological sorting algorithm assigns priorities to subtasks, improving the efficiency of task offloading. Experimental results demonstrate that the proposed strategy significantly reduces task delay and energy consumption, offering an effective solution for efficient task processing and energy saving in autonomous vehicles.
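
As a minimal illustration of the dependency-aware priority step mentioned above, the sketch below topologically sorts a subtask dependency DAG with Kahn's algorithm and assigns each subtask a priority from its position in the resulting order. This is a hedged sketch under assumed conventions, not the paper's implementation; the function name `assign_priorities`, the edge-list representation, and the example graph are all illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): subtasks form a DAG, and a
# topological sort (Kahn's algorithm) yields an execution order in which
# every subtask is scheduled only after all of its predecessors.
from collections import deque

def assign_priorities(num_subtasks, edges):
    """Return {subtask: priority}; lower values are offloaded first.

    `edges` is a list of (u, v) pairs meaning subtask u must finish before v.
    Raises ValueError if the dependency graph contains a cycle.
    """
    succ = {i: [] for i in range(num_subtasks)}   # adjacency list
    indeg = {i: 0 for i in range(num_subtasks)}   # count of unmet dependencies
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1

    ready = deque(i for i in range(num_subtasks) if indeg[i] == 0)
    priority, order = {}, 0
    while ready:
        u = ready.popleft()
        priority[u] = order                        # earlier order => higher priority
        order += 1
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:                      # all predecessors scheduled
                ready.append(v)

    if len(priority) != num_subtasks:
        raise ValueError("dependency graph contains a cycle")
    return priority

# Example: a diamond-shaped dependency graph where subtask 0 feeds 1 and 2,
# which both feed 3, and 3 feeds 4.
if __name__ == "__main__":
    print(assign_priorities(5, [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]))
    # -> {0: 0, 1: 1, 2: 2, 3: 3, 4: 4}
```

A level-based variant could instead give independent subtasks at the same DAG depth equal priority, so they can be offloaded in parallel, which matches the parallelism concern raised in the abstract.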