Hands have evolved as specialised effectors capable of both fine-tuned and gross motor actions. The location and functional capabilities of the hands therefore help determine which visual objects are action-relevant amid the multitude of visual information in our environment. Visuospatial attention plays a critical role in the processing of such inputs. The aim of the present thesis was to investigate how internal representations of the hands, and of the actions we aim to complete with them, impact visuospatial attention near the body.

In Study 1, I investigated how visuospatial attention contributes to luminance contrast sensitivity and object dimension judgements near the hands. Targets were presented either briefly (43 ms) or for a duration sufficient to facilitate shifts in covert visuospatial attention prior to target offset (250 ms). Observers detected the onset of visual objects of varying luminance contrast (Experiment 1) and discriminated the dimension in which rectangles of varying aspect ratios were largest, width or height (Experiment 2), with their hands adjacent to or distant from the display. In Experiment 1, accuracy for low-contrast stimuli was greater when targets were presented for 250 ms than for 43 ms. The opposite was true for high-contrast stimuli: accuracy was greater for targets presented for 43 ms than for 250 ms, and hand proximity did not modulate either effect. In Experiment 2, 250 ms target presentations reduced the vertical bias in aspect ratio judgements and improved visual sensitivity when the hands were adjacent to, versus distant from, the monitor. Visual sensitivity in the hand-adjacent posture was also greater for 250 ms than for 43 ms target durations, indicating enhanced precision of object dimension judgements for near-hand objects following shifts in visuospatial attention.

In Study 2, I examined how internal representation of the hands (handedness and grasping affordances) influences the distribution of visuospatial attention in peripersonal space. Left- and right-handed participants completed a covert visual cueing task, responding with either their dominant or non-dominant hand (Experiment 1), or with the non-responding hand placed adjacent to one of two target placeholders (while the other hand responded), either aligned with the shoulder (Experiment 2) or crossed over the body midline into the opposite region of hemispace (Experiment 3). In blocked trials, targets appeared near the grasping (palmar) or non-grasping (back-of-hand) region of the hand. Experiment 1 found no evidence for visuospatial biases associated with handedness or response-hand laterality. In Experiment 2, right-handers showed a larger attentional cueing cost for objects near the grasping versus non-grasping surface of their dominant hand, suggesting that visuospatial attention is engaged more rapidly by, and disengaged more slowly from, objects near the graspable (versus non-graspable) space. Moreover, only hand proximity biases remained when the hands were crossed over the body midline (Experiment ...