Single-case data analysis still relies heavily on visual inspection, and it remains unclear to what extent different quantitative procedures converge in identifying an intervention effect and its magnitude when applied to the same data; this is the type of evidence provided here for two procedures. One procedure, included because of the importance of providing objective criteria to visual analysts, is a visual aid that fits and projects a split-middle trend while taking data variability into account. The other procedure converts several different metrics into probabilities, making their results comparable. In the present study, we explore to what extent these two procedures coincide regarding the magnitude of the intervention effect in a set of studies drawn from a recent meta-analysis. The procedures agree to a greater extent with the values of the indices computed and with each other and, to a lesser extent, with our own visual analysis. For distinguishing smaller from larger effects, the probability-based approach appears somewhat better suited. Moreover, the results of the field test suggest that the latter is a reasonably good mechanism for translating different metrics into similar labels. User-friendly R code is provided to promote the use of the visual aid, together with a quantification based on nonoverlap and the label provided by the probability-based approach.
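As a rough illustration of the visual aid described above, and not the article's own code, the following minimal R sketch fits a split-middle trend to the baseline phase, projects it into the intervention phase, and adds a variability envelope; the choice of a band of plus or minus one median absolute deviation, and the function and argument names, are assumptions made for this sketch.

```r
# Minimal sketch (not the article's code): fit a split-middle trend to the
# baseline, add a variability envelope, and project both into the
# intervention phase. The +/- band * MAD envelope is an assumption here.
split_middle_projection <- function(baseline, intervention, band = 1) {
  n      <- length(baseline)
  t      <- seq_len(n)
  first  <- seq_len(floor(n / 2))         # first half of baseline
  second <- seq(ceiling(n / 2) + 1, n)    # second half (middle point dropped if n is odd)

  # Split-middle trend: line through the median points of the two halves
  x1 <- median(t[first]);  y1 <- median(baseline[first])
  x2 <- median(t[second]); y2 <- median(baseline[second])
  slope     <- (y2 - y1) / (x2 - x1)
  intercept <- y1 - slope * x1

  # Variability envelope around the trend (illustrative: +/- band * MAD of baseline)
  spread <- band * mad(baseline)

  # Project trend and envelope across baseline and intervention occasions
  all_t <- seq_len(n + length(intervention))
  trend <- intercept + slope * all_t
  data.frame(time  = all_t,
             value = c(baseline, intervention),
             trend = trend,
             lower = trend - spread,
             upper = trend + spread)
}

# Example with made-up data: intervention points above the upper band
# would be flagged as exceeding the projected baseline trend
proj <- split_middle_projection(baseline     = c(5, 6, 5, 7, 6, 8),
                                intervention = c(9, 10, 11, 10, 12))
head(proj)
```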