Robotic sound localization has traditionally been restricted to either on-robot microphone arrays or microphones embedded in aware environments, each of which has limitations due to its static configuration. This work overcomes these static-configuration limitations by using visual localization to track multiple wireless microphones in the environment with enough accuracy to combine their auditory streams in a traditional localization algorithm. In this manner, microphones can move, or be moved, about the environment and still be combined with existing on-robot microphones to extend array baselines and effective listening ranges without re-measuring inter-microphone distances.
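To make the idea concrete, the sketch below illustrates one traditional localization pipeline into which visually tracked microphone positions could be fed: GCC-PHAT time-difference-of-arrival estimation followed by a linear least-squares source fit. This is a minimal illustration under stated assumptions, not the paper's implementation; the function names (`gcc_phat`, `localize`), the 2-D geometry, and the idea that `mic_positions` arrives per-frame from a visual tracker are all hypothetical choices made here for clarity.

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """Delay (s) of `sig` relative to `ref`, estimated via GCC-PHAT."""
    n = sig.size + ref.size
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    cc = np.fft.irfft(R / (np.abs(R) + 1e-12), n=n)  # PHAT weighting
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

def localize(mic_positions, signals, fs, c=343.0):
    """Least-squares 2-D source estimate from pairwise TDOAs.

    mic_positions: (M, 2) array; in the envisioned system these would be
    refreshed by the visual tracker as the wireless microphones move.
    signals: (M, N) array of time-synchronized audio frames.
    """
    m0 = mic_positions[0]
    rows, rhs = [], []
    for i in range(1, len(mic_positions)):
        tau = gcc_phat(signals[i], signals[0], fs)  # delay vs. reference mic
        d = c * tau                                 # range difference (m)
        mi = mic_positions[i]
        # Linearized TDOA equation: 2(m_i - m_0)^T x + 2 d r0
        #   = ||m_i||^2 - ||m_0||^2 - d^2, with r0 = range to mic 0.
        rows.append(np.concatenate((2.0 * (mi - m0), [2.0 * d])))
        rhs.append(mi @ mi - m0 @ m0 - d * d)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol[:2]  # (x, y); sol[2] is the range to the reference mic

if __name__ == "__main__":
    fs, c = 48000, 343.0
    rng = np.random.default_rng(0)
    # Hypothetical layout: mics 0-1 sit on the robot; mics 2-3 stand in
    # for wireless units whose positions the visual tracker would supply.
    mics = np.array([[0.0, 0.0], [0.3, 0.0], [2.0, 1.5], [-1.0, 2.5]])
    src = np.array([1.2, 2.0])
    burst = rng.standard_normal(4096)
    sigs = []
    for m in mics:  # simulate integer-sample propagation delays
        delay = int(round(np.linalg.norm(src - m) / c * fs))
        sigs.append(np.concatenate((np.zeros(delay), burst,
                                    np.zeros(400 - delay))))
    print("true:", src, "estimated:", np.round(localize(mics, np.array(sigs), fs, c), 2))
```

Because the microphone coordinates are an input rather than a calibrated constant, re-running `localize` with updated tracker output is all that is needed when a wireless microphone is moved, which is the property the abstract highlights.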