Introduction
In the context of generative AI's growing role in news production, this study draws on nudge theory to examine the impact of AI-generated content (AIGC) labeling cues on users' perceptions of automated news.

Methods
A 2 (authorship disclosure nudge cue: with vs. without AIGC label) × 2 (automated news type: descriptive vs. evaluative) within-subject experiment was conducted. Thirty-two participants read automated news articles and rated their perceived trustworthiness while their brain activity was recorded with an EEG device.

Results
Disclosure of the AIGC label significantly reduced perceived trustworthiness for both fact-based descriptive and opinion-based evaluative news. In the EEG data, delta, theta, alpha, and beta power spectral density (PSD) were significantly higher with the AIGC label than without it. In the descriptive news condition, TAR was also higher with the AIGC label than without it.

Discussion
These results suggest that AIGC labeling heightens attentional concentration during reading and deepens cognitive processing. The label nudges users to redirect their limited attention and cognitive resources toward re-evaluating information quality, yielding more prudent judgments. The findings enrich the theoretical perspective on transparent disclosure nudges in Internet content governance and offer practical guidance for using content labeling to regulate the media landscape amid AI's pervasive presence.
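
To make the EEG measures concrete, the following is a minimal Python sketch, not the authors' analysis pipeline, of how band-limited PSD and a theta-to-alpha ratio could be computed from a single EEG channel with Welch's method (scipy.signal.welch). The sampling rate, band boundaries, and the reading of TAR as the theta/alpha power ratio are illustrative assumptions; the abstract does not specify them.

import numpy as np
from scipy.signal import welch

# Illustrative frequency bands in Hz; the paper's exact boundaries are not given in the abstract.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_psd(signal, fs, bands=BANDS):
    """Mean Welch PSD within each band for one EEG channel (signal: 1-D array, fs: sampling rate in Hz)."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))  # 2-second windows
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean() for name, (lo, hi) in bands.items()}

def theta_alpha_ratio(band_power):
    """TAR is assumed here to be the theta/alpha power ratio."""
    return band_power["theta"] / band_power["alpha"]

# Usage with synthetic data standing in for a 10-second EEG segment sampled at 250 Hz.
fs = 250
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
power = band_psd(eeg, fs)
print(power, theta_alpha_ratio(power))

Condition-wise comparisons of such band powers (with vs. without the AIGC label) would then be tested statistically, as summarized in the Results.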