Most conventional robots maintain balance by controlling the location of the center of pressure, relying mainly on foot pressure sensors for information. By contrast, humans rely on sensory data from multiple sources, including proprioceptive, visual, and vestibular inputs. Several models have been developed to explain how humans reconcile information from disparate sources to form a stable sense of balance. These models may be useful for developing robots that can maintain dynamic balance more readily by using multiple sensory sources. Because these information sources may conflict, reliance by the nervous system on any one channel can lead to ambiguity about the system state. In humans, experiments that create conflicts between sensory channels by moving the visual field or the support surface indicate that sensory information is adaptively reweighted: unreliable information is rapidly down-weighted, then gradually up-weighted when it becomes valid again. Human balance can also be studied by building robots that model features of human bodies and testing them under similar experimental conditions. We implement a sensory reweighting model based on an adaptive Kalman filter in a bipedal robot and subject it to sensory tests similar to those used on human subjects. Unlike other implementations of sensory reweighting in robots, ours includes vision: optic flow from a camera is used to estimate forward rotation (visual modality), a three-axis gyro represents the vestibular system (non-visual modality), and foot pressure sensors provide the proprioceptive modality. Our model estimates measurement noise in real time and uses it to recompute the Kalman gain on each iteration, improving the robot's ability to balance dynamically.
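The adaptive mechanism described above can be illustrated with a minimal scalar sketch, not the paper's actual implementation: a random-walk model of body tilt in which the measurement-noise variance R is estimated online from the innovation sequence (here via an exponential moving average, an assumed adaptation rule) and the Kalman gain is recomputed on every iteration. A channel whose innovations grow noisy is thereby automatically down-weighted.

```python
import numpy as np

def adaptive_kalman(measurements, q=1e-4, alpha=0.05):
    """Scalar adaptive Kalman filter: tracks a slowly varying tilt
    angle while estimating measurement noise R online from the
    innovations, recomputing the gain at each step.

    q     : assumed process-noise variance (random-walk model)
    alpha : assumed adaptation rate for the R estimate
    """
    x, p = 0.0, 1.0          # state estimate and its variance
    r = 1.0                  # running measurement-noise estimate
    estimates = []
    for z in measurements:
        p += q               # predict: random-walk process model
        innov = z - x        # innovation (measurement residual)
        # adapt R: E[innov^2] = p + R, so track innov^2 - p
        r = max(1e-6, (1 - alpha) * r + alpha * (innov**2 - p))
        k = p / (p + r)      # Kalman gain recomputed with new R
        x += k * innov       # measurement update
        p *= (1 - k)
        estimates.append(x)
    return np.array(estimates), r

# Synthetic sensor: true tilt 0.2 rad, noise sigma 0.1 (R = 0.01)
rng = np.random.default_rng(0)
z = 0.2 + 0.1 * rng.standard_normal(2000)
est, r_hat = adaptive_kalman(z)
```

In this sketch `r_hat` converges toward the true measurement-noise variance, so the gain settles at an appropriate weighting without it being specified in advance; the same idea extends to a vector filter fusing the visual, vestibular, and proprioceptive channels with one adaptive R per channel.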
We observe that we can duplicate many important features of postural sway in humans, including automatic sensory reweighting effects, constant phase with respect to amplitude, and a temporal asymmetry in the reweighting gains.