Although computational linguistic methods—such as topic modelling, sentiment analysis and emotion detection—can provide social media researchers with insights into online public discourses, it is not self-evident how these methods should be used, and transparent guidance on applying them critically is lacking. There is a growing body of work on the strengths and shortcomings of these methods. Applying best practices from this literature—setting expectations, presenting trajectories, examining with context and critically reflecting—we analyse the diachronic Twitter discourse of two UK case studies: the longitudinal discourse around the NHS Covid-19 digital contact-tracing app and the snapshot discourse around the Ofqual A Level grade calculation algorithm. We identified difficulties in interpretation and potential application in all three approaches. Other shortcomings, such as the detection of negation and sarcasm, were also found. We discuss the need for greater transparency of these methods for diachronic social media researchers, including the potential for combining them with qualitative approaches—such as corpus linguistics and critical discourse analysis—in a more formal framework.
In August 2020, the UK government and the regulation body Ofqual replaced school examinations with automatically computed A Level grades in England and Wales. The algorithm factored in school attainment in each subject over the previous three years. Government officials initially stated that the algorithm was used to combat grade inflation. After public outcry, teacher assessment grades were used instead. Views concerning who was to blame for this scandal were expressed on the social media website Twitter. While previous work used NLP-based opinion-mining tools to analyse this discourse, shortcomings included accuracy issues, difficulties in interpretation and limited conclusions on who authors blamed. Thus, we chose to complement this research by analysing 18,239 tweets relating to the A Level algorithm using Corpus Linguistics (CL) and Critical Discourse Analysis (CDA), underpinned by social actor representation. We examined how blame was attributed to different entities presented as social actors or as having social agency. Through analysing transitivity in this discourse, we found that the algorithm itself, the UK government and Ofqual were all implicated as potentially responsible social actors through active agency, agency metaphor possession and instances of passive constructions. According to our results, students attracted limited blame under the same analysis. We discuss how this builds upon existing research in which the algorithm is implicated, and how such a wide range of constructions obscures blame. Methodologically, we demonstrated that CL and CDA complement existing NLP-based computational linguistic tools in researching the 2020 A Level algorithm; however, there is further scope for using these approaches in an iterative manner.
BACKGROUND Since September 2020, the NHS Covid-19 contact tracing app has been used to mitigate the spread of Covid-19 in the UK. Since its launch, the app has been part of the discussion regarding the perceived social agency of decision-making algorithms. On the social media website Twitter, a plethora of views about the app have been expressed, but these have thus far been analysed only for sentiment and topic trajectories, leaving the perceived social agency of the app underexplored. OBJECTIVE We aimed to examine how social agency is discussed in social media public discourse regarding algorithmically operated decisions, particularly when the AI agency responsible for a specific information system is not openly disclosed, as in the case of the Covid-19 contact tracing app. To do this, we analysed the presentation of the NHS Covid-19 App on Twitter, focusing on the portrayal of social agency and the impact of the app's deployment on society. We also aimed to discover what the presentation of social actors communicates about the perceived responsibility of the app. METHODS Using Corpus Linguistics and Critical Discourse Analysis, underpinned by Social Actor Representation, we drew on the link between grammatical and social agency and analysed a corpus of 118,316 tweets from September 2020 to July 2021 to see whether the app was portrayed as a social actor. RESULTS We found that active presentations of the app – seen mainly through personalisation and agency metaphor – dominated the discourse. The app was presented as a social actor in 96% of the cases considered, and the proportion of active presentations relative to passive ones grew over time. These active presentations showed the app to be a social actor in five main ways: informing, instructing, providing permission, disrupting, and functioning. We found a small number of occasions where the app was presented passively, through backgrounding and exclusion.
CONCLUSIONS We concluded that Twitter users presented the NHS Covid-19 App as an active social actor with a clear sense of social agency. The study also revealed that Twitter users perceived the app as responsible for their welfare, especially when it provided instructions or permission, and this perception remained consistent throughout the discourse, particularly during significant events. Overall, this study contributes to understanding how social agency is discussed in social media discourse related to algorithmic-operated decisions.