Autonomous Underwater Vehicles (AUVs), a class of unmanned intelligent ocean vehicles, can perform dangerous tasks in the ocean in place of humans, so applying reinforcement learning (RL) to achieve intelligent AUV control is of great significance. This paper proposes an AUV obstacle-avoidance framework based on event-triggered reinforcement learning. First, an environment perception model is designed to determine the relative positions of the AUV with respect to all unknown obstacles and the known target. Second, because the AUV's detection range is limited and the proposed method must handle unknown static obstacles and unknown dynamic obstacles simultaneously, two different event-triggered mechanisms are designed. Soft actor–critic (SAC), an off-policy algorithm, is adopted, and the improved reinforcement learning algorithm is then combined with the event-triggered mechanisms. Finally, a simulation experiment of the obstacle-avoidance task is carried out on the Gazebo simulation platform. The results show that the proposed method obtains higher rewards and completes the task successfully. Moreover, the trajectory and the distance to each obstacle confirm that the AUV reaches the target well while maintaining a safe distance from both static and dynamic obstacles.
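The two event-triggered mechanisms described above can be illustrated with a minimal sketch. All names, thresholds, and the trigger conditions here are illustrative assumptions, not the paper's exact design: one trigger fires when a static obstacle enters the (limited) detection range, the other when a dynamic obstacle closes on the AUV faster than a threshold between time steps; the policy (e.g. SAC) is queried only on an event, otherwise the previous action is held.

```python
# Hypothetical event-triggered policy query for AUV obstacle avoidance.
# Names and thresholds are illustrative assumptions, not the paper's design.

SONAR_RANGE = 50.0        # assumed limited detection range (m)
CLOSING_THRESHOLD = 0.5   # assumed per-step closing-distance threshold (m)

def static_trigger(curr_dists):
    """Fire when any obstacle enters the assumed detection range."""
    return any(d < SONAR_RANGE for d in curr_dists)

def dynamic_trigger(prev_dists, curr_dists):
    """Fire when any obstacle closes on the AUV faster than the threshold."""
    return any(p - c > CLOSING_THRESHOLD for p, c in zip(prev_dists, curr_dists))

def select_action(policy, state, last_action, prev_dists, curr_dists):
    """Query the RL policy only on an event; otherwise hold the last action."""
    if static_trigger(curr_dists) or dynamic_trigger(prev_dists, curr_dists):
        return policy(state)  # event fired: recompute the control action
    return last_action        # no event: reuse the action, saving computation
```

The design intent is that control updates happen only when the perceived situation changes meaningfully, which reduces how often the learned policy must be evaluated.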