The cochlear implant (CI) is a neuroprosthesis that allows profoundly deaf patients to recover speech intelligibility. This recovery relies on long-term adaptive processes that build coherent percepts from the coarse information delivered by the implant. Here we analyzed the longitudinal postimplantation evolution of word recognition in a large sample of CI users under unisensory (visual or auditory) and bisensory (visuoauditory) conditions. We found that, despite considerable recovery of auditory performance during the first year postimplantation, CI patients maintain a much higher level of word recognition in speechreading conditions than normally hearing subjects, even several years after implantation. Consequently, we show that CI users achieve higher visuoauditory performance than normally hearing subjects tested with similar auditory stimuli. This superior performance is due not only to greater speechreading ability but, most importantly, to a greater capacity to integrate visual input with the distorted speech signal. Our results suggest that these behavioral changes in CI users might be mediated by a reorganization of the cortical network involved in speech recognition that favors a more specific involvement of visual areas. Furthermore, they provide crucial indications for guiding the rehabilitation of CI patients through visually oriented therapeutic strategies.

cochlear implant | deafness | multisensory integration | speech comprehension

Despite the apparent division between sensory modalities from the receptors to high cortical levels, we can simultaneously integrate visual and auditory signals, resulting in percepts qualitatively distinct from those derived from a single unisensory stimulus (1, 2). Furthermore, when the bisensory stimuli are precisely congruent in time or space, multisensory integration is expressed at the behavioral level by perceptual improvements through reduced ambiguity (3, 4) and at the neuronal level by enhanced neuronal activity (5). Multisensory integration is also essential for speech recognition, which is based on the simultaneous integration of visual information derived from lip movements and auditory cues produced by the talker (6). The McGurk effect, in which a mismatch between the visual and auditory speech signals is artificially introduced, reveals that the visual information derived from lip movements can strongly influence our auditory perception (7). Although we might not be aware of the relevance of visual cues for normal speech recognition, the influence of vision becomes convincingly apparent when the auditory information is embedded in noise. In degraded auditory conditions, visuoauditory presentation leads to higher recognition performance than auditory-alone stimulation (8, 9), through a mechanism that mimics an improvement in the acoustic signal-to-noise ratio (SNR) (10).

In normally hearing (NH) subjects, although speechreading performance is very low, the association during development between the...