This paper describes our progress in near-range (within 0 to 2 meters) ego-centric docking using vision under variable lighting conditions (indoors, outdoors, and at dusk). The docking behavior is fully autonomous and reactive: the robot directly responds to the ratio of the pixel counts of two colored fiducials without constructing an explicit model of the landmark. This approach is similar to visual homing in insects, has a low computational complexity of O(n^2), and supports a fast update rate. In order to accurately segment the colored fiducials under unconstrained lighting conditions, the spherical coordinate transform (SCT) color space is used, rather than RGB or HSV, in conjunction with an adaptive segmentation algorithm. Experiments were conducted with a "daughter" robot docking with a "mother" robot. Results showed that 1) vision-based docking is faster than teleoperation yet equivalent in performance, and 2) adaptive segmentation is more robust under challenging lighting conditions, including outdoors.
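To make the pipeline concrete, the following is a minimal sketch, assuming NumPy and one common formulation of the spherical coordinate transform; the function names and the pixel-ratio steering cue are illustrative assumptions, not the paper's actual implementation or control law.

```python
# Hypothetical sketch: RGB -> SCT conversion (one common formulation) and a
# reactive cue from the pixel counts of two segmented fiducials.
import numpy as np

def rgb_to_sct(rgb):
    """Convert an HxWx3 RGB image to SCT channels (L, angle_A, angle_B).

    One common SCT formulation treats (R, G, B) as Cartesian coordinates:
      L       = sqrt(R^2 + G^2 + B^2)          (brightness / magnitude)
      angle_A = arccos(B / L)
      angle_B = arccos(R / (L * sin(angle_A)))
    The two angles vary little with illumination intensity, which is why
    SCT is attractive for color segmentation under changing lighting.
    """
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-9                                   # avoid divide-by-zero
    mag = np.sqrt(r**2 + g**2 + b**2)
    angle_a = np.arccos(np.clip(b / (mag + eps), -1.0, 1.0))
    angle_b = np.arccos(np.clip(r / (mag * np.sin(angle_a) + eps), -1.0, 1.0))
    return np.stack([mag, angle_a, angle_b], axis=-1)

def steering_cue(mask_fiducial_1, mask_fiducial_2):
    """Reactive cue from the ratio of pixel counts of two segmented fiducials.

    A normalized difference near 0 suggests the robot is roughly aligned with
    the landmark; its sign gives a turn direction. (Illustrative only.)
    """
    n1 = int(mask_fiducial_1.sum())
    n2 = int(mask_fiducial_2.sum())
    if n1 + n2 == 0:
        return None                              # landmark not visible
    return (n1 - n2) / (n1 + n2)
```

In this sketch, the binary fiducial masks would come from an adaptive segmentation step applied to the SCT angle channels; only the pixel counts, not a geometric model of the landmark, drive the docking behavior.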