Smart speakers are becoming increasingly ubiquitous in society and are now used to satisfy a variety of information needs, from asking about the weather or traffic to accessing the latest breaking news. Their growing use for news and information consumption presents new questions related to the quality, source diversity, and comprehensiveness of the news-related information they convey. These questions have significant implications for voice assistant technologies acting as algorithmic information intermediaries, but systematic information quality audits have not yet been undertaken. To address this gap, we develop a methodological approach for evaluating information quality in voice assistants for news-related queries. We demonstrate the approach on the Amazon Alexa voice assistant, first characterising Alexa's performance in terms of response relevance, accuracy, and timeliness, and then further elaborating analyses of information quality based on query phrasing, news category, and information provenance. We discuss the implications of our findings for future audits of information quality on voice assistants and for the consumption of news information via such algorithmic intermediaries more broadly.
Peace processes are complex, protracted, and contentious, involving significant bargaining and compromising among various societal and political stakeholders. In civil war terminations, it is pertinent to measure the pulse of the nation to ensure that the peace process is responsive to citizens' concerns. Social media wields tremendous power as a tool for dialogue, debate, organization, and mobilization, thereby adding more complexity to the peace process. Using Colombia's final peace agreement and national referendum as a case study, we investigate the influence of two important indicators: intergroup polarization and public sentiment toward the peace process. We present a detailed linguistic analysis to detect intergroup polarization and a predictive model that leverages Tweet structure, content, and user-based features to predict public sentiment toward the Colombian peace process. We demonstrate that had pro-accord stakeholders leveraged public opinion from social media, the outcome of the Colombian referendum could have been different.
Retracted research discussed on social media can spread misinformation, yet we lack an understanding of how retracted articles are mentioned by academic and non-academic users. This is especially relevant on Twitter due to the platform's prominent role in science communication. Here, we analyze the pre- and post-retraction differences in Twitter engagement metrics and content of mentions for over 3,800 retracted English-language articles alongside comparable non-retracted articles. We subset these findings according to the five user types detected by our supervised learning classifier: members of the public, scientists, bots, practitioners, and science communicators. We find that retracted articles receive greater overall engagement than non-retracted articles, especially among members of the public and bot users, with the majority of engagement happening prior to retraction. Our results highlight non-scientists' involvement in retracted article discussions and suggest an opportunity for Twitter to include a retraction notice feature.
The growing popularity of online fundraising (aka "crowdfunding") has attracted significant research on the subject. In contrast to previous studies that attempt to predict the success of crowdfunded projects based on specific characteristics of the projects and their creators, we present a more general approach that focuses on crowd dynamics and is robust to the particularities of different crowdfunding platforms. We rely on a multi-method analysis to investigate the correlates, predictive importance, and quasi-causal effects of features that describe crowd dynamics in determining the success of crowdfunded projects. By applying a multi-method analysis to a study of fundraising in three different online markets, we uncover universal crowd dynamics that ultimately decide which projects will succeed. In all analyses and across three markets, we consistently find that funders' behavioural signals (1) are significantly correlated with fundraising success; (2) approximate fundraising outcomes better than the characteristics of projects and their creators, such as credit grade, company valuation, and subject domain; and (3) have significant quasi-causal effects on fundraising outcomes while controlling for potentially confounding project variables. By showing that universal features deduced from crowd behaviour are predictive of fundraising success on different crowdfunding platforms, our work provides design-relevant insights about novel types of collective decision-making online. This research thus suggests potential ways to leverage cues from the crowd and catalyses research into crowd-aware system design.