YouTube’s “up next” feature algorithmically selects, suggests, and displays videos to watch after the one that is currently playing. This feature has been criticized for limiting users’ exposure to diverse media content and information sources; meanwhile, YouTube has reported that it has implemented various technical and policy changes to address these concerns. However, there is little publicly available data to support either the existing concerns or YouTube’s claims of having addressed them. Drawing on the idea of “platform observability,” this article combines computational and qualitative methods to investigate the types of content that the algorithms underpinning YouTube’s “up next” feature amplify over time, using three search terms associated with sociocultural issues where concerns have been raised about YouTube’s role: “coronavirus,” “feminism,” and “beauty.” Over six weeks, we collected the videos (and their metadata, including channel IDs) that were highly ranked in the search results for each keyword, as well as the highly ranked recommendations associated with those videos. We repeated this exercise for three steps in the recommendation chain and then examined patterns in the recommended videos (and the channels that uploaded them) for each query, as well as their variation over time. We found evidence of YouTube’s stated efforts to boost “authoritative” media outlets, but misleading and controversial content continues to be recommended. We also found that while algorithmic recommendations offer diversity in videos over time, there are clear “winners” at the channel level that are given a visibility boost in YouTube’s “up next” feature. These impacts are attenuated differently, however, depending on the nature of the issue.
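The collection procedure the abstract describes, seeding with top-ranked search results and then following top-ranked “up next” recommendations for three steps, amounts to a breadth-first crawl of the recommendation graph. The sketch below is illustrative only: the paper does not publish its collection code, and `fetch_search_results` and `fetch_recommendations` are hypothetical placeholders standing in for whatever scraping or API tooling was actually used.

```python
from collections import deque

MAX_DEPTH = 3  # three steps in the recommendation chain, per the study design
QUERIES = ["coronavirus", "feminism", "beauty"]

def fetch_search_results(query, top_n=10):
    """Hypothetical placeholder: return the top-N video IDs ranked
    in the search results for a query."""
    raise NotImplementedError("supply a scraper or API client here")

def fetch_recommendations(video_id, top_n=5):
    """Hypothetical placeholder: return the top-N 'up next' video IDs
    shown alongside a given video."""
    raise NotImplementedError("supply a scraper or API client here")

def crawl_recommendation_chain(query):
    """Breadth-first crawl: seed with search results, then follow
    top-ranked recommendations for MAX_DEPTH steps, recording each
    (parent, child, depth) edge for later analysis of which videos
    and channels recur across queries and over time."""
    edges = []
    seen = set()
    frontier = deque((vid, 0) for vid in fetch_search_results(query))
    while frontier:
        video_id, depth = frontier.popleft()
        if video_id in seen or depth >= MAX_DEPTH:
            continue  # stop expanding at the depth limit; skip revisits
        seen.add(video_id)
        for rec in fetch_recommendations(video_id):
            edges.append((video_id, rec, depth + 1))
            frontier.append((rec, depth + 1))
    return edges

# Repeated weekly over six weeks for each query, this would yield a
# time series of recommendation graphs, e.g.:
#   snapshots = {q: crawl_recommendation_chain(q) for q in QUERIES}
```

Deduplicating on video ID (the `seen` set) keeps the crawl finite; the depth counter bounds it at the three recommendation steps the study reports.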
This paper makes a case for addressing humour as an online safety issue so that social media platforms can include it in their risk assessments and harm mitigation strategies. We take the ‘online safety’ regulation debate, especially as it is taking place in the UK and the European Union, as an opportunity to reconsider how and when humour targeted at historically marginalised groups can cause harm. Drawing on sociolegal literature, we argue that in their online safety efforts, platforms should address lawful humour targeted at historically marginalised groups because it can cause individual harm via its cumulative effects and contribute to broader social harms. We also demonstrate how principles and concepts from critical humour studies and Feminist Standpoint Theory can help platforms assess the differential impacts of humour.