“…Work in humanoid robotics might decompose this task as the following steps: (i) identify a potential target in peripheral vision based on a rapid analysis of superficial salient features (colour, shape, movement); (ii) orient to and fixate on the object using foveal vision to form an internal 3-dimensional model of the object and of its key properties (shape, size, texture, and so forth); (iii) in parallel, form a second set of representations of the position and orientation of the object in space relative to those of the body, arm, and hand; (iv) match the first, “what?”, model with a variety of stored “templates” in order to determine whether this particular item is, indeed, a suitable target for reaching; (v) apply algorithms to the computed “where?” representations of the object and body, and make use of knowledge of the kinematic and dynamic properties of the arm, hand, and digits, to determine appropriate movement trajectories; (vi) execute the planned movements largely ballistically but using some sensory feedback in the final approach, to locate, enclose, and lift the object in an effective way. Now consider the capacity of an animal such as the Etruscan shrew, the smallest living terrestrial mammal (and known to be a remarkably efficient predator), to localise, identify, and entrap an agile prey insect using only its whiskers (Brecht, Naumann et al., 2011). The problem is similar in many ways to that of human (or humanoid) sensory-guided reaching.…”
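The staged decomposition quoted above can be sketched as a simple pipeline. This is purely illustrative: every class, function, and parameter below is a hypothetical stand-in, not any real robotics API, and real systems would replace each stub with substantial perception and planning machinery.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the two parallel representations in the text:
# a "what?" model of the object itself and a "where?" model of its pose.

@dataclass
class WhatModel:            # step (ii): foveal 3-D model of the object
    shape: str
    size_cm: float
    texture: str

@dataclass
class WhereModel:           # step (iii): object pose relative to the body
    position: tuple         # (x, y, z) in body-centred coordinates
    orientation_deg: float

def detect_salient_target(scene):
    """Step (i): rapid peripheral screen on superficial salient features."""
    return max(scene, key=lambda obj: obj["salience"])

def is_reachable_target(what, templates):
    """Step (iv): match the 'what?' model against stored templates."""
    return any(what.shape == t.shape and abs(what.size_cm - t.size_cm) < 2.0
               for t in templates)

def plan_trajectory(where):
    """Step (v): a straight-line reach from a fixed shoulder origin,
    standing in for true kinematic/dynamic trajectory planning."""
    shoulder = (0.0, 0.0, 0.0)
    return [shoulder, where.position]

def execute_reach(trajectory):
    """Step (vi): largely ballistic execution; final-approach sensory
    feedback is omitted in this sketch."""
    return trajectory[-1]   # end-effector arrives at the planned endpoint

# Usage: run the stages in order on a toy two-object scene.
scene = [{"salience": 0.2, "pos": (1.0, 0.0, 0.0)},
         {"salience": 0.9, "pos": (0.3, 0.1, 0.4)}]
target = detect_salient_target(scene)
what = WhatModel(shape="cup", size_cm=9.0, texture="smooth")
where = WhereModel(position=target["pos"], orientation_deg=15.0)
templates = [WhatModel("cup", 10.0, "smooth")]
if is_reachable_target(what, templates):
    endpoint = execute_reach(plan_trajectory(where))
```

Note how steps (i)-(iv) and step (v) operate on separate representations (`WhatModel` vs `WhereModel`), mirroring the parallel "what?"/"where?" streams in the quoted decomposition.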