The proliferation of low-cost UAVs (unmanned aerial vehicles) with small radar cross-sections necessitates innovative counter-UAV solutions. Since these UAVs typically operate over a radio control link, a promising defense technique is passive scanning of the radio frequency (RF) spectrum to detect UAV control signals, an approach that is further enhanced when combined with machine-learning (ML) and deep-learning (DL) methods. This field is actively researched, with various studies proposing ML/DL architectures that compete for optimal accuracy. However, there is a notable gap regarding robustness: a UAV detector's ability to maintain high accuracy across diverse scenarios, rather than excelling in one specific test scenario and failing in others. This aspect is critical, as inaccurate UAV detection can have severe consequences. In this work, we introduce a new dataset designed specifically to test robustness. Instead of drawing the test data from the same pool as the training data, as in existing approaches, we define multiple categories of test data based on channel conditions. Evaluating existing UAV detectors on this dataset, we found that, although coefficient classifiers have outperformed convolutional neural networks (CNNs) in previous works, image classifiers are approximately 40% more robust than coefficient classifiers under low signal-to-noise ratio (SNR) conditions. Specifically, the CNN classifier sustained its accuracy under RF channel conditions not included in the training set, whereas the coefficient classifier failed partially or completely depending on the channel characteristics.
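The per-channel-condition evaluation described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the burst signals, SNR levels, and function names (`add_awgn`, `make_test_categories`) are hypothetical, and only an AWGN channel is modeled, whereas the dataset may cover other channel impairments as well.

```python
import numpy as np

def add_awgn(signal, snr_db, rng):
    """Add white Gaussian noise to a complex baseband signal at a target SNR (dB)."""
    sig_power = np.mean(np.abs(signal) ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    # Split noise power evenly between I and Q components.
    noise = np.sqrt(noise_power / 2) * (rng.standard_normal(signal.shape)
                                        + 1j * rng.standard_normal(signal.shape))
    return signal + noise

def make_test_categories(clean_signals, snr_levels_db, seed=0):
    """Group test samples into per-SNR categories instead of one mixed test pool."""
    rng = np.random.default_rng(seed)
    return {snr: [add_awgn(s, snr, rng) for s in clean_signals]
            for snr in snr_levels_db}

# Hypothetical example: three clean control-link bursts, degraded into
# separate test categories at several SNR levels.
bursts = [np.exp(2j * np.pi * 0.1 * np.arange(1024)) for _ in range(3)]
categories = make_test_categories(bursts, snr_levels_db=[-10, 0, 10, 20])
```

Reporting a detector's accuracy separately for each category, rather than averaging over a single mixed pool, is what exposes the robustness gap between the image and coefficient classifiers.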