“…We expect that because current AI voice-synthesis technology still cannot effectively handle special cases such as accented rhythm, alliteration, and the Chinese pronunciation of erhua in context, human voices may exhibit more varied syllable and accent processing than AI-synthesized voices, thereby expressing more emotion, providing more cues, and eliciting stronger cognitive-emotional feedback from listeners, reflected in this study primarily in brainwave activity. Given that prior work has used psychophysiological methods to study AI newscasts (Bucher and Schumacher, 2006; Kallinen and Ravaja, 2007; Seleznov et al., 2019; Heiselberg, 2021; Heiselberg et al., 2022), our first research hypothesis is that human-voiced news broadcasts induce greater EEG activity and cognitive activation in listeners than AI-synthesized voices, and consequently produce greater cognitive communication effects (Hypothesis 1).…”