Model-free deep reinforcement learning (DRL) is regarded as an effective approach for multi-target cognitive electronic reconnaissance (MCER) missions. However, DRL networks with poor generalisation suffer a sharp drop in mission completion rate when parameters such as the reconnaissance area size, target number, or platform speed vary even slightly. To address this issue, this paper introduces a novel scene reconstruction method for MCER missions together with a mission group adaptive transfer deep reinforcement learning (MTDRL) algorithm. The algorithm adapts reconnaissance strategies quickly to varied mission scenes by transferring strategy templates and compressing multi-target perception states. To validate the method, the authors developed a transfer learning model for unmanned aerial vehicle (UAV) MCER and conducted three sets of experiments, varying the reconnaissance area size, the target number, and the platform speed in turn. The results show that the MTDRL algorithm outperforms two commonly used DRL algorithms, achieving an 18% higher mission completion rate and a 5.49 h shorter training time. The mission completion rate of the MTDRL algorithm is also much higher than that of a typical non-DRL algorithm. In addition, the UAV exhibits stable hovering and repeated-reconnaissance behaviour at the radar detection boundary, ensuring flight safety during missions.