It is well established that linguistic processing is primarily a left-hemisphere activity, while emotional prosody processing is lateralized to the right hemisphere. Does attention, directed at different regions of the talker's face, reflect this pattern of lateralization? We investigated visuospatial attention across a talker's face with a dual-task paradigm, using dot detection and language comprehension measures. A static image of a talker was shown while participants listened to speeches spoken in one of two prosodic formats, emotional or neutral. A single dot was superimposed on the speaker's face in one of four facial regions on half of the trials. Dot detection effects depended on the emotion condition: in the neutral condition, discriminability was greater for the right than for the left side of the face image, and at the mouth compared to the eye region. The opposite pattern occurred in the emotional prosody condition. The results support a model wherein visuospatial attention used during language comprehension is directed by the left hemisphere given neutral emotional prosody, and by the right hemisphere given primarily negative emotional prosodic cues.

Despite nearly 60 years of research on how human attention functions when processing electronic displays of text and graphic information or when understanding auditory speech, comparatively little is known about human attention as it applies to auditory-visual speech comprehension. The purpose of the present study is to investigate how the visuospatial attention mechanism used to process a talker's facial information is affected by the nature of the prosodic information heard during a language comprehension task.

In face-to-face communication, information arrives at our language-understanding systems from the talker's mouth, in the form of auditory linguistic and prosodic information, and from the face, in the form of visual articulatory cues (visible speech), facial expressions, eye gaze, and head movements. This smorgasbord of information available to our minds and senses is modulated by an attention system that can be flexibly adapted to suit the circumstances of the particular listening and viewing situation. Further, the amount of influence of visible speech depends on contextual and perceiver characteristics (e.g., Jordan & Sergeant, 2000; Sekiyama & Tohkura, 1993). Several studies conducted in our laboratory have yielded adult age effects on the influence of visible speech during auditory-visual language processing. Compared to younger adults, older adults are usually found to be more reliant on visible speech (Thompson, 1995; Thompson & Malloy, 2004), although the age effect reverses during extremely attention-demanding task situations, such as during a shadowing task, when younger