This study introduces a multi-objective deep reinforcement learning (DRL) framework for adaptive transit signal priority control, designed to enhance both safety and efficiency in mixed-autonomy traffic environments. The framework uses real-time data from connected and automated vehicles (CAVs) to define states, actions, and rewards, with traffic conflicts serving as the safety reward and vehicle waiting times as the efficiency reward. Transit signal priority is incorporated by assigning weights based on vehicle type and passenger capacity, balancing these two competing objectives. A simulation model of a real-world intersection in Changsha, China, was used to evaluate the framework across multiple CAV penetration rates and weighting configurations. The results showed that an equal (5:5) safety-to-efficiency weight ratio achieved the best trade-off, minimizing delays and conflicts across all vehicle types. At 100% CAV penetration, delays and conflicts were most balanced: buses averaged a waiting time of 4.93 s with 0.4 conflicts per vehicle, while CAVs averaged 1.97 s with 0.49 conflicts per vehicle. Under mixed traffic conditions, the framework performed best at a 75% CAV penetration rate, where buses, conventional cars, and CAVs all achieved their best combined efficiency and safety. Comparative analysis against fixed-time signal control and other DRL-based methods highlighted the framework's adaptability and robustness, supporting its application to mixed-traffic management and to intelligent transportation systems in future smart cities.
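As a minimal illustration of the weighting scheme described above (the notation here is ours, not taken from the paper), the scalarized reward can be read as a linear combination of the two objectives:

$$ R_t = w_s \, R^{\text{safety}}_t + w_e \, R^{\text{eff}}_t, \qquad w_s + w_e = 1, $$

where $R^{\text{safety}}_t$ penalizes observed traffic conflicts, $R^{\text{eff}}_t$ penalizes vehicle waiting times aggregated with type- and passenger-capacity-based weights, and the reported best configuration corresponds to $w_s = w_e = 0.5$ (the 5:5 ratio).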