Our two ears do not function as fixed, independent sound receptors; their operation is coupled and dynamically adjusted via the contralateral medial olivocochlear efferent reflex (MOCR). The MOCR possibly facilitates speech recognition in noisy environments. This role, however, has yet to be demonstrated, because selectively deactivating the reflex during natural acoustic listening has so far not been possible in human subjects. Here, we propose that this and other roles of the MOCR may be elucidated using the unique stimulus controls provided by cochlear implants (CIs). Pairs of sound processors were constructed that either did or did not mimic the effects of the contralateral MOCR with CIs. In the non-mimicking condition (STD strategy), the two processors in a pair functioned independently of each other. When configured to mimic the effects of the MOCR (MOC strategy), however, the two processors communicated with each other, and the amount of compression in a given frequency channel of each processor in the pair decreased with increasing output energy from the contralateral processor. Analysis of the output signals from the STD and MOC strategies suggests that, in natural binaural listening, the MOCR possibly causes a small reduction in audibility but enhances frequency-specific interaural level differences and the segregation of spatially non-overlapping sound sources. The proposed MOC strategy could improve the performance of CI and hearing-aid users.
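To make the contrast between the STD and MOC strategies concrete, the following minimal Python sketch illustrates one frequency channel under the stated principle: compression is fixed in the STD strategy, whereas in the MOC strategy it is relaxed as the contralateral processor's output energy grows. The logarithmic back-end compression function, the control law, and all parameter values (c_max, c_min, k) are illustrative assumptions, not the actual processor implementation described here.

```python
import numpy as np

def log_compress(env, c):
    """Generic logarithmic back-end compression of a normalised envelope (0..1)."""
    return np.log(1.0 + c * env) / np.log(1.0 + c)

def moc_channel_output(ipsi_env, contra_energy, c_max=1000.0, c_min=50.0, k=5.0):
    """
    One frequency channel of a hypothetical MOC-like strategy.

    ipsi_env      : normalised envelope samples for this channel (ipsilateral processor)
    contra_energy : short-term output energy of the contralateral processor (0..1)
    c_max, c_min  : limits on the compression parameter (hypothetical values)
    k             : hypothetical gain of the contralateral control law

    The compression parameter c decreases as contra_energy increases, so a louder
    contralateral output yields less compressive (more linear) processing here.
    """
    c = np.clip(c_max - k * c_max * contra_energy, c_min, c_max)
    return log_compress(ipsi_env, c)

# Toy comparison of the two conditions for a single channel of the left processor.
left_env = np.abs(np.sin(np.linspace(0.0, np.pi, 100)))  # toy channel envelope
right_energy = 0.6                                        # toy contralateral output energy

std_out = log_compress(left_env, 1000.0)                  # STD: contralateral side ignored
moc_out = moc_channel_output(left_env, right_energy)      # MOC: compression relaxed by contra energy
```

In this sketch the STD output applies the same fixed compression regardless of the other ear, while the MOC output becomes more linear when the contralateral channel is energetic, which is the mechanism proposed to enhance interaural level differences.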