Artificial intelligence (AI) has the potential to disrupt the advertising industry, as marketers and brands can leverage its power to create highly engaging, personalized content. However, AI is prone to bias and misinformation and can be used to manipulate. Lawmakers, such as those in the European Union, therefore aim to mandate AI disclosure messages to protect consumers, yet the implications of such disclosures have not been studied. This paper draws on existing theories of persuasion knowledge, disclosure, inferences of manipulative intent, and AI aversion to develop a model of consumer attitudes toward AI disclosures in Instagram advertisements. A three-condition between-subjects online experiment (final N = 161) was conducted to test the model, and the data were analyzed using a moderated mediation model. AI disclosures led to a direct decrease in advertising attitude. In addition, AI disclosures decreased brand attitude only among consumers with high AI aversion. There were no effects of AI disclosures on source credibility. These effects were mediated by inferences of manipulative intent. However, participants who viewed the AI disclosure reported lower inferences of manipulative intent than participants who did not view the AI disclosure. Furthermore, no differences were found between disclosures of AI use in creating the image versus the text. Implications are discussed from both theoretical and managerial viewpoints and highlight why the use of AI on social media for advertising purposes should be limited as it becomes more transparent in the future.
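As a concrete illustration of the analysis described above, the following is a minimal sketch of a moderated mediation test (disclosure → inferences of manipulative intent → advertising attitude, with AI aversion as moderator) using ordinary least squares and a percentile bootstrap. All column names, the simulated data, and the chosen moderator level are hypothetical placeholders; the paper's actual materials and model specification may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 161  # matches the reported final sample size

# Hypothetical variables standing in for the paper's measures.
df = pd.DataFrame({
    "disclosure": rng.integers(0, 2, n),     # 0 = no disclosure, 1 = AI disclosure
    "ai_aversion": rng.normal(4.0, 1.0, n),  # moderator (e.g., 7-point scale)
})
df["imi"] = 3.0 + 0.4 * df["disclosure"] + rng.normal(0, 1, n)  # mediator
df["ad_attitude"] = (5.0 - 0.5 * df["imi"]
                     - 0.3 * df["disclosure"] * df["ai_aversion"]
                     + rng.normal(0, 1, n))                     # outcome

def conditional_indirect(d, aversion):
    """a-path (moderated by AI aversion) times b-path, at a given aversion level."""
    a = smf.ols("imi ~ disclosure * ai_aversion", data=d).fit()
    b = smf.ols("ad_attitude ~ imi + disclosure * ai_aversion", data=d).fit()
    a_path = a.params["disclosure"] + a.params["disclosure:ai_aversion"] * aversion
    return a_path * b.params["imi"]

# Percentile-bootstrap CI for the conditional indirect effect at high aversion
# (5.0 is an arbitrary illustrative value, not taken from the paper).
boot = [conditional_indirect(df.sample(frac=1, replace=True), aversion=5.0)
        for _ in range(2000)]
print("95% bootstrap CI:", np.percentile(boot, [2.5, 97.5]))
```

A significant bootstrap interval (excluding zero) at high, but not low, moderator levels is the usual evidence pattern for the kind of moderated mediation the abstract reports.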
Background: The proliferation of generative artificial intelligence (AI), such as ChatGPT, has added complexity and richness to the virtual environment by increasing the presence of AI-generated content (AIGC). Although social media platforms such as TikTok have begun labeling AIGC to help users distinguish it from human-generated content, little research has examined the effect of these labels.

Objective: This study investigated the impact of AIGC labels on perceived accuracy, message credibility, and sharing intention for misinformation through a web-based experiment, aiming to refine the strategic application of AIGC labels.

Methods: The study used a 2×2×2 mixed experimental design, with AIGC labels (presence vs absence) as the between-subjects factor and information type (accurate vs inaccurate) and content category (for-profit vs not-for-profit) as within-subjects factors. Participants, recruited via the Credamo platform, were randomly assigned to either an experimental group (with labels) or a control group (without labels). Each participant evaluated 4 sets of content, rating perceived accuracy, message credibility, and sharing intention. Statistical analyses were performed using SPSS version 29 and included repeated-measures ANOVA and simple effects analysis, with significance set at P<.05.

Results: As of April 2024, the study had recruited a total of 957 participants; after screening, 400 participants each were allocated to the experimental and control groups. The main effects of AIGC labels were not significant for perceived accuracy, message credibility, or sharing intention. However, the main effects of information type and of content category were significant for all 3 dependent variables (both P<.001). Several interactions were also significant: for perceived accuracy, information type × content category (P=.005); for message credibility, information type × content category (P<.001); and for sharing intention, both information type × content category (P<.001) and information type × AIGC labels (P=.008).

Conclusions: AIGC labels minimally affect perceived accuracy, message credibility, and sharing intention but help distinguish AIGC from human-generated content. The labels do not negatively affect users' perceptions of platform content, indicating their potential for fact-checking and governance. However, AIGC labeling should vary by information type, as labels can slightly increase sharing intention and perceived accuracy for misinformation. This highlights the need for more nuanced AIGC labeling strategies and further research.
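For readers unfamiliar with the reported analysis, the sketch below shows how a mixed-design ANOVA of this kind can be run in Python with the pingouin package. For brevity it collapses the 2×2×2 design to 2 (label: between-subjects) × 2 (information type: within-subjects) on a single dependent variable; the full design adds content category as a second within-subjects factor, and the paper itself used SPSS. All column names and simulated values are hypothetical.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)

# Hypothetical long-format data: one row per participant per within-subject cell.
rows = []
for pid in range(800):                           # 400 per group, as in the study
    group = "label" if pid < 400 else "no_label"
    for info in ("accurate", "inaccurate"):
        rows.append({
            "subject": pid,
            "label_group": group,                # between-subjects factor
            "info_type": info,                   # within-subjects factor
            "credibility": rng.normal(4.0 if info == "accurate" else 3.0, 1.0),
        })
df = pd.DataFrame(rows)

# 2 (label presence) x 2 (information type) mixed ANOVA on message credibility.
aov = pg.mixed_anova(data=df, dv="credibility", within="info_type",
                     subject="subject", between="label_group")
print(aov.round(3))
```

The output table reports F, P, and effect-size values for the between factor, the within factor, and their interaction; the interaction row corresponds to tests like the information type × AIGC label effect described in the Results.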