Lane-changing behavior has a significant impact on traffic efficiency and may lead to traffic delays or even accidents. It is therefore important to plan a safe and efficient lane-changing trajectory that coordinates with the surrounding environment. Most conventional lane-changing models must formulate and solve a constrained optimization problem throughout the maneuver, whereas a reinforcement learning agent can take the current state as input and directly output control actions to the vehicle. This study develops a lane-changing model using the deep deterministic policy gradient (DDPG) method, which simultaneously controls the lateral and longitudinal motion of the vehicle. To optimize its performance, a reward function is designed that combines safety, efficiency, gap, headway, and comfort features. To avoid collisions, a safety modification model is developed that checks and, if necessary, corrects the commanded acceleration at every time step. Trajectory data from 1169 lane-changing scenarios extracted from the Next Generation Simulation (NGSIM) dataset are used to train and test the model. The proposed model converges quickly during training. Testing results show that it completes safe and efficient lane changes across diverse scenarios, with both shorter time headways and shorter lane-changing durations than human drivers. Compared with a conventional dynamic lane-changing trajectory planning model, the proposed model reduces collision risk. The model is also evaluated in mixed traffic of automated and non-automated vehicles in SUMO. Simulation results show that it also improves the average speed of the overall traffic flow.
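
The abstract names the reward components and the per-step safety check only at a high level. As a rough illustration, the minimal Python sketch below shows how such a composite reward and an acceleration-correction step might look. All weights, shaping functions, and the kinematic braking bound here are hypothetical assumptions for illustration, not the paper's actual design.

```python
import numpy as np

# Hypothetical weights; the paper's actual coefficients are not given in the abstract.
W_SAFETY, W_EFFICIENCY, W_GAP, W_HEADWAY, W_COMFORT = 1.0, 0.5, 0.3, 0.3, 0.2

def reward(ttc, speed, desired_speed, gap, headway, jerk):
    """Composite reward combining the five feature groups named in the abstract.

    ttc: time-to-collision to the closest conflicting vehicle [s]
    speed, desired_speed: ego speed and target speed [m/s]
    gap: longitudinal gap to the new leader in the target lane [m]
    headway: time headway to the new leader [s]
    jerk: rate of change of acceleration [m/s^3]
    """
    r_safety = -np.exp(-ttc)              # penalize small time-to-collision
    r_eff = -abs(speed - desired_speed)   # reward tracking the desired speed
    r_gap = np.tanh(gap / 10.0)           # reward a sufficiently large gap
    r_headway = -max(0.0, 1.0 - headway)  # penalize headways below ~1 s
    r_comfort = -abs(jerk)                # penalize jerky control
    return (W_SAFETY * r_safety + W_EFFICIENCY * r_eff + W_GAP * r_gap
            + W_HEADWAY * r_headway + W_COMFORT * r_comfort)

def safe_acceleration(a_cmd, gap, v_ego, v_lead, dt=0.1, a_max_brake=4.0):
    """Per-step safety modification: clamp the policy's commanded acceleration
    so the ego vehicle could still stop within the available gap if the leader
    braked at a_max_brake. A simple kinematic bound, not the paper's model.
    """
    d_lead_stop = v_lead**2 / (2.0 * a_max_brake)             # leader stopping distance
    v_next = v_ego + a_cmd * dt                               # ego speed after one step
    d_ego_stop = v_next * dt + v_next**2 / (2.0 * a_max_brake)
    if d_ego_stop > gap + d_lead_stop:  # commanded action would be unsafe
        return -a_max_brake             # fall back to maximum braking
    return a_cmd
```

In a DDPG setup of this kind, a check like `safe_acceleration` would typically wrap the actor's longitudinal output before it is applied to the environment, so that exploration noise cannot push the vehicle into an unsafe state.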