We present a novel control architecture for integrating visually guided walking and whole-body reaching in a humanoid robot. We propose to use the robot's gaze as a common reference frame for both locomotion and reaching, as suggested by behavioral neuroscience studies in humans. A gaze controller allows the robot to track and fixate a target object, and the motor information related to gaze control is then used to i) estimate the reachability of the target, ii) steer locomotion, and iii) control whole-body reaching. Reachability is a measure of how well the target can be reached, depending on the position and posture of the robot with respect to the target; it is obtained from the gaze motor information through a mapping that the robot has learned autonomously from its own motor experience, which we call the Reachable Space Map. In our approach, both locomotion and whole-body movements are seen as ways to maximize the reachability of a visually detected object, thus i) expanding the robot's workspace to the entire visible space and ii) exploiting the robot's redundancy to optimize reaching. We implement our method on a full 48-DOF humanoid robot and provide experimental results in the real world.
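The abstract describes the pipeline only at a high level. The minimal Python sketch below illustrates one possible reading of it: a map learned from the robot's own motor experience converts gaze motor variables into a scalar reachability score, which in turn decides between steering locomotion (to improve reachability) and triggering whole-body reaching. All names (ReachableSpaceMap, control_step), the sigmoid model, and the threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np


class ReachableSpaceMap:
    """Hypothetical learned mapping from gaze motor variables
    (e.g. neck pitch/yaw, eye pan/tilt, vergence) to a reachability
    score in [0, 1]."""

    def __init__(self, weights):
        # Parameters assumed to be learned from the robot's own motor experience.
        self.weights = np.asarray(weights, dtype=float)

    def reachability(self, gaze_config):
        # Placeholder regressor: any learned model could stand here.
        features = np.asarray(gaze_config, dtype=float)
        return float(1.0 / (1.0 + np.exp(-features @ self.weights)))


def control_step(gaze_config, rsm, reach_threshold=0.8):
    """One conceptual decision step: with the target fixated, read the
    reachability from the gaze configuration, then either walk to
    increase it or launch a whole-body reach."""
    r = rsm.reachability(gaze_config)
    if r < reach_threshold:
        # Steer locomotion so as to increase reachability,
        # e.g. walk toward the fixated target.
        return "walk", r
    # Reachability is high enough: trigger whole-body reaching.
    return "reach", r


if __name__ == "__main__":
    rsm = ReachableSpaceMap(weights=[0.5, -0.3, 0.2, 0.1, 0.4])
    gaze = [0.1, -0.2, 0.05, 0.0, 0.3]  # example gaze motor configuration
    action, score = control_step(gaze, rsm)
    print(f"reachability={score:.2f} -> action: {action}")
```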