Adding features to the surface of a part creates opportunities to serialize the part with an identifier and/or to provide enhanced measurements of the surface geometry, provided that the features can be detected. Scale-space filtering is a common tool used with both pixel and voxel representations of objects for such purposes. One challenge in extending this class of successful algorithms to detect features of 3D surfaces in 2D imagery is that the filtering is performed in the plane of the camera sensor, which may not be conveniently related to the orientation of the target object's surface. If the object is represented with 3D voxel data, achieving a similar effect likewise depends on the relationship between the plane of the local surface and the orientation of the voxel grid relative to the target object. Thus, a mesh-based approach is desirable. Furthermore, whereas many feature detection schemes target scale-invariant features, the desired outcome here is the ability to localize features intentionally created by deforming the surface at a given scale, that is, scale-specific artifacts. This paper proposes a technique for adding features of a known size to a 3D mesh representing an object, then computing per-scale differences in the local mesh surface geometry to match a feature of known scale. We introduce a tunable two-scale depth measurement scheme to quantify the displacement of a vertex from the local surface, which can be a strong indicator of such features. We print and scan 3D models with fiducial features distributed across the surface to demonstrate the fidelity and accuracy of the proposed feature detection scheme.
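As a rough illustration of a two-scale depth measurement of the kind described above, the sketch below fits a least-squares plane to a small and a large vertex neighborhood and compares the vertex's signed displacement from each fit; a large difference suggests a deformation at roughly the scale separating the two neighborhoods. The neighborhood selection, plane fitting, and normal orientation here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def plane_fit_depth(vertices, center_idx, neighbor_idx, ref_normal):
    """Signed distance of a vertex from a least-squares plane fit to a
    neighborhood of surrounding vertices (hypothetical helper)."""
    pts = vertices[neighbor_idx]
    centroid = pts.mean(axis=0)
    # The plane normal is the right singular vector with the smallest
    # singular value of the centered neighborhood points.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    # Orient the fitted normal consistently with the vertex's mesh normal
    # so signed depths are comparable across scales.
    if np.dot(normal, ref_normal) < 0:
        normal = -normal
    return float(np.dot(vertices[center_idx] - centroid, normal))

def two_scale_depth(vertices, center_idx, small_nbrs, large_nbrs, ref_normal):
    """Difference of per-scale depths; values far from zero indicate a
    displacement present at the large scale but not the small one."""
    return (plane_fit_depth(vertices, center_idx, large_nbrs, ref_normal)
            - plane_fit_depth(vertices, center_idx, small_nbrs, ref_normal))
```

In practice, the small and large neighborhoods might be the 1-ring and k-ring of the vertex (or radius-based neighborhoods matched to the known feature size), with the two-scale depth thresholded to flag candidate feature vertices.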