Twitter has been widely adopted into journalistic workflows, as it provides instant and widespread access to a wealth of content about breaking news events, while also serving to disseminate reporting on those events. The content on Twitter, however, poses several challenges for journalists, as it arrives unfiltered, full of noise, and at high velocity. Building on the results of the first national survey of social media use in Irish newsrooms, this paper investigates the adoption of social media into journalistic workflows, journalists' attitudes towards various aspects of social media, and the content and perspectives generated by these online communities. It particularly investigates how Twitter shapes the processes of sourcing and verification in newsrooms, and assesses how notions of trust factor into the adoption of the Twitter platform and its content into these processes. The paper further analyses relationships between journalist profiles and adopted practices and attitudes, and seeks to understand how Twitter operates in the current journalistic landscape. While this paper draws its data from a survey of journalists in Ireland, the analysis of the relationship between trust, sourcing, and verification reveals broader patterns about journalistic values, and how these values and practices may operate in the field of journalism as a whole.
Data storytelling is rapidly gaining prominence as a characteristic activity of digital journalism, with significant adoption by small and large media houses. While a handful of previous studies have examined particular aspects of data storytelling, such as narratives and visualisation, or have offered analyses based on single cases, we have yet to see a systematic effort to harness these available resources to gain better insight into what characterises good data stories and how they are created. To bridge this knowledge gap, this study analysed 44 cases of outstanding data storytelling practice, comprising winning entries of the Global Editors Network's Data Journalism Award from 2013 to 2016. Based on a conceptual model we developed, we uniformly characterised each of the 44 cases and then proceeded to determine the types of these stories and the nature of the technologies employed in creating them. Our findings refine the traditional typology of data stories from the journalistic perspective and also identify core technologies and tools that appear central to good data journalism practice. We also discuss our findings in relation to the recently published 2017 winning entries. Our results have significant implications for the competencies required of data journalists in contemporary and future newsrooms.
This paper explores data journalism education, with a particular focus on formal training in the higher education sector globally. The study draws on data from: (1) the 2017 Global Data Journalism Survey, used to study the state of data journalism education and training requirements, and (2) a dataset of 219 unique modules or programmes on data journalism or related fields, curated and examined in order to understand the nature of data journalism education in universities across the world. The results show that while journalists interested in data are highly educated in journalism or closely related fields, they do not have a strong level of education in the more technical areas of data journalism, such as data analysis, coding and data visualisation. The study further reveals that a high proportion of data journalism courses are concentrated in the US, with a growing number of courses developing across the world, particularly in Europe. Despite this, education in the field does not have a strong academic underpinning, and while many courses are emerging in this area, there are not enough academically trained instructors to lead and/or teach such interdisciplinary programmes in the higher education sector.
News organisations have longstanding practices for archiving and preserving their content. The emerging practice of data journalism has led to the creation of complex new outputs, including dynamic data visualisations that rely on distributed digital infrastructures. Traditional news archiving does not yet have systems in place for preserving these outputs, which means that we risk losing this crucial part of reporting and news history. Following a systematic approach to studying the literature in this area, this paper provides a set of recommendations to address the lacunae identified. The paper contributes to the field by (1) providing a systematic study of the literature in the relevant fields, (2) providing a set of recommendations for adopting long-term preservation of dynamic data visualisations as part of the news publication workflow, and (3) identifying concrete actions that data journalists can take immediately to ensure that these visualisations are not lost.