Data science teams working on a shared analysis face coordination problems such as dividing up the work, monitoring performance, and integrating the pieces. Research on distributed software development teams has raised the potential of stigmergic coordination, that is, coordination through a shared work product in place of explicit communication. The MIDST system was developed to support stigmergic coordination by making individual contributions to a shared work product visible, legible, and combinable. In this paper, we present initial studies of a total of 40 student teams (24 using MIDST) showing that teams that used MIDST did experience the intended system affordances, did seem to coordinate at least in part stigmergically, and performed better on an assigned project.
Fake news can significantly misinform people, who often rely on online sources and social media for their information. Current research on fake news detection has mostly focused on analyzing fake news content and how it propagates on a network of users. In this paper, we focus on detecting fake news by assessing its credibility. By analyzing public fake news data, we show that information on news sources (and authors) can be a strong indicator of credibility. Our findings suggest that an author's history of association with fake news, and the number of authors of a news article, can play a significant role in detecting fake news. Our approach can help improve traditional fake news detection methods, wherein content features are often used to detect fake news.
Information literacy encompasses a range of information evaluation skills for the purpose of making judgments. In the context of crowdsourcing, divergent evaluation criteria might introduce bias into collective judgments. Recent experiments have shown that crowd estimates can be swayed by social influence. This might be an unanticipated effect of media literacy training: encouraging readers to critically evaluate information falls short when their judgment criteria are unclear and vary among social groups. In this exploratory study, we investigate the criteria used by crowd workers in reasoning through a task. We crowdsourced the evaluation of a variety of information sources, identifying multiple factors that may affect individuals' judgments, as well as the accuracy of aggregated crowd estimates. Using a multi-method approach, we identified relationships between individual information assessment practices and analytical outcomes in crowds, and we propose two analytic criteria, relevance and credibility, to optimize collective judgment in complex analytical tasks.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.