Decision-making for autonomous maneuvering in dynamic, uncertain, and nonlinear environments is a challenging frontier problem. Deep deterministic policy gradient (DDPG) is an effective method for such problems, but learning complex strategies with it demands extensive computation and time. To address this issue, we propose a node clustering (NC) method, inspired by grid clustering, and integrate it into the DDPG algorithm for learning complex strategies. In the NC method, the node membership degree is defined according to the specific characteristics of the maneuvering decision-making problem, and error-handling strategies are designed to effectively reduce the number of transitions in the replay buffer while ensuring that the most typical transitions are retained. Combining NC and DDPG, an autonomous maneuvering learning and decision-making algorithm, NC_DDPG, is then designed, and its workflow and pseudo-code are presented. Finally, NC_DDPG is applied to a typical short-range air-combat maneuvering decision problem for verification. The results show that NC_DDPG significantly accelerates autonomous learning and decision-making under both balanced and disadvantageous conditions, taking only about 77% of the time required by Vector DDPG. The clustering scale of NC affects learning speed: simulation results across five scales indicate that smaller clustering scales significantly increase learning time, albeit with a high degree of randomness. Compared with Twin Delayed DDPG (TD3), NC_DDPG consumes only 0.58% of the time of traditional TD3, and after the NC method is applied to TD3, NC_DDPG requires approximately 20–30% of the time of NC_TD3.
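To make the grid-inspired node clustering idea concrete, the following is a minimal illustrative sketch of pruning a replay buffer by clustering transitions on a state-space grid and keeping only the most typical transition per cell. The specific membership degree used in the paper is defined from the maneuvering problem and is not reproduced here; this sketch substitutes a simple proxy (closeness to the cell centroid), and all function and parameter names (`cluster_replay_buffer`, `cell_size`, `keep_per_cell`) are hypothetical.

```python
import numpy as np

def cluster_replay_buffer(states, transitions, cell_size=0.5, keep_per_cell=1):
    """Grid-based node clustering over a replay buffer (illustrative sketch).

    Each transition's state is mapped to a grid cell; within a cell, only the
    most "typical" transitions are kept. Typicality is approximated here by
    closeness to the cell's mean state; the paper's actual node membership
    degree is problem-specific and not reproduced.
    """
    # Group transition indices by the grid cell their state falls into.
    cells = {}
    for idx, s in enumerate(states):
        key = tuple((np.asarray(s, dtype=float) // cell_size).astype(int))
        cells.setdefault(key, []).append(idx)

    kept = []
    for idxs in cells.values():
        pts = np.asarray([states[i] for i in idxs], dtype=float)
        centroid = pts.mean(axis=0)
        # Proxy membership degree: smaller distance to centroid = more typical.
        dists = np.linalg.norm(pts - centroid, axis=1)
        order = np.argsort(dists)[:keep_per_cell]
        kept.extend(idxs[i] for i in order)

    return [transitions[i] for i in sorted(kept)]
```

Under this sketch, three nearby states collapse into one cell and only the transition closest to that cell's centroid survives, so the buffer shrinks while each visited region of the state space remains represented.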