The recent availability of affordable see-through wearable displays has fostered the development of several new interfaces and applications. Some take the augmented reality path, seeking to blend physical objects with overlaid 3D models or textual information. Others are much simpler and follow a basic paradigm in which the spatial integration between the real world and the data overlay is dropped. This is the case, for instance, with most applications based on Google Glass hardware, where textual data and images partially share the user's field of view but are not anchored to physical features. This is an important difference, since it marks the shift from a cooperative see-through mode, which characterizes proper augmented reality, to a competitive overlay, where the user's attention is contended between real objects and displayed data. As a result, the user's focus must continuously shift from one context to the other, possibly leading to both reduced productivity and usage strain. In this paper we address exactly this issue. Specifically, we assess the role of different properties of the overlay, including the level of occlusion, the depth of the data layer, the position of the view frustum, and the impact of stereo vision. The study has been carried out by means of a real-world evaluation performed with a general-purpose see-through device in a practical application scenario.