Adaptive signal detection in scenarios with a limited number of sources of interest and background interferers (fewer than the number of antenna elements) can be performed efficiently using diagonally loaded covariance matrix estimates, but the resulting detectors are not strictly constant false-alarm rate (CFAR). This loss of "CFARness" means that the problem of adaptive interference mitigation and the problem of adaptive false-alarm threshold control must be treated separately, yet both must draw on the same collection of secondary training samples. Here we consider a "two-stage" adaptive detection scheme that optimally partitions the total sample support T into two sets: T_CME data samples are used to design the adaptive filter (beamformer), and the remaining T_CFAR samples are used to calculate the adaptive scalar false-alarm threshold. We present a comparative analysis of the detection performance of "one-stage" CFAR and "two-stage" adaptive detectors.
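The sketch below illustrates the two-stage structure described above, under assumptions not specified in the abstract: the first T_CME secondary samples form a diagonally loaded sample covariance estimate and an MVDR-type beamformer, and the remaining T_CFAR samples set the scalar threshold via an empirical quantile of the filter-output power. The array size, loading factor, false-alarm probability, and quantile-based threshold rule are illustrative choices, not the paper's specific detector.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative setup (values are assumptions, not from the paper) ---
N = 8             # number of antenna elements
T = 64            # total secondary (training) sample support
T_cme = 32        # samples for covariance estimation / beamformer design
T_cfar = T - T_cme  # samples for adaptive threshold calculation
pfa = 1e-2        # desired false-alarm probability (illustrative)
loading = 10.0    # diagonal loading factor (illustrative)

s = np.ones(N, dtype=complex) / np.sqrt(N)  # presumed target steering vector

# Secondary data: interference-plus-noise only (white noise here for simplicity)
Z = (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T))) / np.sqrt(2)

# --- Stage 1: design the adaptive filter from the first T_cme samples ---
Z_cme = Z[:, :T_cme]
R_dl = Z_cme @ Z_cme.conj().T / T_cme + loading * np.eye(N)  # loaded sample covariance
w = np.linalg.solve(R_dl, s)          # MVDR-type weight (unnormalized)
w = w / (s.conj() @ w)                # distortionless response toward s

# --- Stage 2: set the scalar threshold from the remaining T_cfar samples ---
Z_cfar = Z[:, T_cme:]
cell_stats = np.abs(w.conj() @ Z_cfar) ** 2   # filter-output powers of threshold-training cells
# Empirical quantile as the adaptive threshold; other CFAR rules could be substituted.
threshold = np.quantile(cell_stats, 1.0 - pfa)

# --- Detection on a cell under test ---
x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
detect = np.abs(w.conj() @ x) ** 2 > threshold
print(f"threshold = {threshold:.3f}, detection = {bool(detect)}")
```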