Different news articles about the same topic often offer different perspectives: an article about gun violence might emphasize gun control, another might promote Second Amendment rights, and a third might focus on mental health issues. In communication research, these perspectives are known as "frames", and their use in news media can influence readers' opinions in multiple ways. In this paper, we present a method for effectively detecting frames in news headlines. Our training and performance evaluation are based on a new dataset of news headlines related to the issue of gun violence in the United States. This Gun Violence Frame Corpus (GVFC) was curated and annotated by journalism and communication experts. Our proposed approach sets a new state of the art for multiclass news frame detection, outperforming a recent baseline by 35.9% in absolute accuracy. We apply our frame detection approach in a large-scale study of 88k news headlines covering gun violence in the U.S. between 2016 and 2018.
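To make the task concrete: frame detection here is multiclass text classification over short headlines. The abstract does not name the classifier, so the sketch below is only an illustration of that setup using a simple bag-of-words baseline (not the GVFC authors' model); the toy headlines and frame labels are invented stand-ins for the annotated corpus.

```python
# Minimal sketch of multiclass frame detection on news headlines.
# NOTE: not the paper's method; headlines, labels, and pipeline are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy headlines annotated with frames (stand-ins for GVFC data).
headlines = [
    "Lawmakers push new background-check bill after shooting",
    "Gun owners rally to defend Second Amendment rights",
    "Experts call for better mental health screening after tragedy",
    "City council debates stricter gun control ordinance",
]
frames = ["gun_control", "2nd_amendment", "mental_health", "gun_control"]

# TF-IDF features + multinomial logistic regression as a simple multiclass baseline.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(headlines, frames)

# Predict the frame of an unseen headline.
print(model.predict(["Senate weighs universal background checks"]))
```

In practice, a stronger model (e.g., a fine-tuned transformer encoder) would replace the bag-of-words pipeline, but the overall input/output structure (headline in, frame label out) is the same.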
Crowdcoding, a method that outsources "coding" tasks to large numbers of people on the internet, has emerged as a popular approach for annotating texts and visuals. However, the performance of this approach for analyzing social media data in journalism and mass communication research has not been systematically assessed. This study evaluated the validity and efficiency of crowdcoding based on the analysis of 4,000 tweets about the 2016 U.S. presidential election. The results show that, compared with traditional quantitative content analysis, crowdcoding yielded comparably valid results and was more efficient, but was more expensive under most circumstances.
People's comfort with and acceptance of artificial intelligence (AI) instantiations is a topic that has received little systematic study. This is surprising given the topic's relevance to the design, deployment, and even regulation of AI systems. To help fill this gap, we conducted a mixed-methods analysis based on a survey of a representative sample of the U.S. population (N = 2254). Results show that there are two distinct social dimensions to comfort with AI: as a peer and as a superior. For both dimensions, general and technological efficacy traits (locus of control, communication apprehension, robot phobia, and perceived technology competence) are strongly associated with acceptance of AI in various roles. Female and older respondents were also less comfortable with the idea of AI agents in various roles. A qualitative analysis of comments collected from respondents complemented our statistical approach. We conclude by exploring the implications of our research for AI acceptability in society.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context in which an article is cited and indicate whether the citing article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.