Two-point discrimination is widely used to measure tactile spatial acuity. The validity of the two-point threshold as a spatial acuity measure rests on the assumption that two points can be distinguished from one only when they are sufficiently separated to evoke spatially distinguishable foci of neural activity. However, some previous research has challenged this view, suggesting instead that two-point task performance benefits from an unintended non-spatial cue, allowing spuriously good performance at small point separations. We compared the traditional two-point task to an equally convenient alternative task in which participants attempt to discern the orientation (vertical or horizontal) of two points of contact. Using precision digital-readout calipers, we administered two-interval forced-choice versions of both tasks to 24 neurologically healthy adults on the fingertip, finger base, palm, and forearm, and we used Bayesian adaptive testing to estimate the participants' psychometric functions on the two tasks. Traditional two-point performance remained significantly above chance even at zero point separation. In contrast, two-point orientation discrimination approached chance as point separation approached zero, as expected for a valid measure of tactile spatial acuity. Traditional two-point performance was so inflated at small point separations that 75%-correct thresholds could be determined on all tested sites for fewer than half of the participants. The 95%-correct thresholds on the two tasks were similar and correlated with receptive field spacing. In keeping with previous critiques, we conclude that the traditional two-point task provides an unintended non-spatial cue, resulting in spuriously good performance at small point separations. Unlike two-point discrimination, two-point orientation discrimination rigorously measures tactile spatial acuity. We recommend two-point orientation discrimination for neurological assessment.
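
For illustration, the following is a minimal sketch (in Python; not the authors' code) of how a Bayesian adaptive procedure can estimate a two-interval forced-choice (2IFC) threshold of the kind reported above. It assumes a cumulative-Gaussian psychometric function with a 50% guess rate, parameterized so that the threshold is the 75%-correct point; the threshold grid, flat prior, stimulus-placement rule, and simulated observer are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of Bayesian adaptive threshold estimation for a 2IFC task.
# Assumptions (not from the paper): cumulative-Gaussian psychometric function
# with a 50% guess rate, a flat prior over a threshold grid, the stimulus
# placed at the posterior-mean threshold, and a simulated observer.
import math
import random

import numpy as np


def p_correct(sep, threshold, slope=1.0):
    """P(correct) in 2IFC: rises from the 0.5 guess rate toward 1.0.

    By construction, p_correct(threshold, threshold) = 0.75, so the
    threshold parameter is the 75%-correct point.
    """
    z = (sep - threshold) / slope
    return 0.5 + 0.5 * 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))


grid = np.linspace(0.1, 10.0, 200)               # candidate thresholds (mm)
posterior = np.full(grid.size, 1.0 / grid.size)  # flat prior

true_threshold = 3.0                             # simulated observer (mm)
rng = random.Random(1)

for trial in range(60):
    # Simple placement rule: test at the posterior-mean threshold.
    # (The psi method would instead maximize expected information gain.)
    sep = float(np.dot(posterior, grid))
    correct = rng.random() < p_correct(sep, true_threshold)
    # Bayes update of the posterior over the threshold grid.
    like = np.array([p_correct(sep, t) for t in grid])
    posterior *= like if correct else (1.0 - like)
    posterior /= posterior.sum()

print(f"estimated 75%-correct threshold: {np.dot(posterior, grid):.2f} mm "
      f"(simulated observer: {true_threshold} mm)")
```

Under this assumed parameterization, the 95%-correct point would fall at threshold + 1.28 × slope, since Φ(1.28) ≈ 0.9; any comparison of 75%- and 95%-correct thresholds therefore depends on the assumed psychometric function shape.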