Autonomous vehicles navigating urban roads require technology that combines low latency with high computing power. Because the vehicle's own resources are limited, it must offload computation tasks to an edge server (ES) for processing assistance. However, as the number of vehicles continues to grow, how edge servers allocate their limited resources to autonomous vehicles becomes critical to the success of urban intelligent transportation services. This paper establishes an urban road scenario with multiple autonomous vehicles and an edge computing server and considers two main driving-behaviour transition resource requests: car-following behaviour requests and lane-changing behaviour requests. Acknowledging that vehicles may encounter unforeseen traffic hazards when switching driving behaviours, a safety redundancy strategy is employed to allocate additional resources to each vehicle, and the vehicle resource allocation problem in the autonomous driving system is modelled on this basis. A double deep Q-network (DDQN) is then used to solve this model and maximize the total system utility by jointly considering resource costs, system revenue, and autonomous vehicle safety. Finally, simulation results indicate that the proposed deep-reinforcement-learning-based dynamic resource allocation scheme for autonomous vehicles under edge computing not only substantially improves system utility and reduces processing delay compared with traditional greedy and value-iteration algorithms, but also effectively ensures vehicle safety.
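To make the DDQN component concrete, the sketch below shows a minimal double deep Q-network update step in PyTorch. It is an illustrative sketch only: the state dimension, action set, network architecture, and hyperparameters are assumptions for demonstration and do not reflect the paper's actual resource allocation formulation.

```python
# Minimal DDQN update sketch (PyTorch). All dimensions, architecture, and
# hyperparameters are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn

STATE_DIM = 8   # hypothetical: e.g. vehicle/ES load and request features
N_ACTIONS = 4   # hypothetical: discrete resource-allocation levels
GAMMA = 0.99    # discount factor

def make_qnet():
    # Small MLP mapping a state to one Q-value per allocation action.
    return nn.Sequential(
        nn.Linear(STATE_DIM, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, N_ACTIONS),
    )

online_net = make_qnet()
target_net = make_qnet()
target_net.load_state_dict(online_net.state_dict())
optimizer = torch.optim.Adam(online_net.parameters(), lr=1e-3)

def ddqn_update(states, actions, rewards, next_states, dones):
    """One DDQN gradient step on a minibatch of transitions.

    Double Q-learning decouples action selection from evaluation:
    the online network picks the greedy next action, while the
    target network scores it, reducing Q-value overestimation.
    """
    with torch.no_grad():
        # Online net selects the next action ...
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        # ... target net evaluates it.
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        targets = rewards + GAMMA * (1.0 - dones) * next_q
    q = online_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = nn.functional.smooth_l1_loss(q, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example call with random placeholder data standing in for a replay buffer.
batch = 32
loss = ddqn_update(
    torch.randn(batch, STATE_DIM),
    torch.randint(0, N_ACTIONS, (batch,)),
    torch.randn(batch),
    torch.randn(batch, STATE_DIM),
    torch.zeros(batch),
)
```

In a full system the reward signal would encode the utility trade-off the abstract describes (resource cost, system revenue, and the safety redundancy margin), and the target network would be periodically synchronized with the online network.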