From QAnon conspiracy theories to Russian government-sponsored election interference, social media disinformation campaigns are a part of online life, and identifying these threats amid the posts that billions of social media users upload each day is a challenge. To help sort through massive amounts of data, social media platforms are developing AI systems that automatically remove harmful content, primarily through text-based analysis. But these techniques won't identify all the disinformation on social media. After all, much of what people post consists of photos, videos, audio recordings, and memes. Developing the entirely new AI systems needed to detect such multimedia disinformation will be difficult.
We're developing an AI early warning system to monitor how manipulated content online, such as altered photos in memes, leads in some cases to violent conflict and societal instability, and how it can interfere with democratic elections. Look no further than the 2019 Indonesian election to see how online disinformation can spill over into real-world harm. Our system may prove useful to journalists, peacekeepers, election monitors, and others who need to understand how manipulated content spreads online during elections and in other contexts.
Amid the threat of digital misinformation, we present a pilot study of the efficacy of an online social media literacy campaign aimed at equipping individuals in Indonesia with the skills to identify misinformation. We found that users who engaged with our online training materials and educational videos were more likely to identify misinformation than those in our control group (total N = 1,000). Given these promising preliminary results, we plan to expand our efforts in this area and build on the lessons learned from this pilot study.
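The study's headline result is a comparison of identification rates between a treatment group that engaged with the training and a control group that did not. As a rough sketch of how such a comparison might be tested, and only as an illustration, since the pilot's actual analysis method and group-level counts are not reported here, a chi-squared test on a 2x2 contingency table of hypothetical counts could look like this:

```python
# Illustrative sketch only: the figures below are hypothetical, not the
# pilot study's reported data.
from scipy.stats import chi2_contingency

# Rows: treatment (engaged with training), control.
# Columns: correctly identified misinformation, did not.
observed = [
    [330, 170],  # hypothetical treatment-group counts
    [270, 230],  # hypothetical control-group counts
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-squared = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```

A significant p-value in a table like this would indicate that identification rates differ between the two groups, which is the kind of evidence the pilot study reports.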
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context in which an article is cited and describe whether it provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.