This paper investigates the impact of trajectory predictor performance on the encounter probability generated by an adaptive conflict detection tool, and examines the flexibility afforded by the tool's adjustable thresholds, using historical radar track data. To achieve these objectives, two figures of merit are proposed: System Dynamic Range and System Tuning Envelope. To examine the conflict detector's performance variability under different uncertainty levels and predictor types, simple multi-horizon trajectory predictors trained with two machine learning techniques of differing characteristics are assessed at various look-ahead times: extreme gradient boosting, which is discrete in nature, and a multi-layer perceptron regressor, which is continuous in nature. The results highlight the interdependence between trajectory predictor and conflict detector performance, and show that this relationship can be quantified by a sigmoid function. In addition, the two proposed figures of merit prove effective for selecting suitable operating points of an adaptive conflict detector with dynamic thresholds, and for deriving the trajectory predictor performance requirements needed to achieve the expected detection performance at different look-ahead times.
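As an illustrative sketch only (not the fitted model reported in the paper), the sigmoid relationship between a predictor performance metric $\epsilon$ (e.g., prediction error) and the resulting detection performance $P_d$ can be expressed in a logistic form, where $P_{\max}$, $k$, and $\epsilon_0$ are hypothetical parameters introduced here for illustration:
\[
P_d(\epsilon) = \frac{P_{\max}}{1 + e^{\,k(\epsilon - \epsilon_0)}}, \qquad k > 0.
\]
Under this assumed parameterization, $\epsilon_0$ marks the predictor error at which detection performance falls to half of its maximum, and $k$ controls how sharply performance degrades around that point.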