This paper analyses how YouTube authenticates engagement metrics and, more specifically, how the platform corrects view counts by removing “fake views” (i.e., views considered artificial or illegitimate by the platform). Working with one and a half years of data extracted from a thousand French YouTube channels, we show the massive extent of the corrections done by YouTube, which concern the large majority of the channels and over 78% of the videos in our corpus. Our analysis shows that corrections are not done continuously as videos collect new views, but instead occur in batches, generally around 5 p.m. every day. More significantly, most corrections occur relatively late in the life of the videos, after they have reached most of their audience, and the delay in correction is not independent of the final popularity of videos: videos corrected later in their life are more popular on average than those corrected earlier. We discuss the probable causes of this phenomenon and its possible negative consequences for content diffusion. By inflating view counts, fake views could make videos appear more popular than they are and unwarrantedly encourage their recommendation, thus potentially altering the public debate on the platform. This could have implications for the spread of online misinformation, but their in-depth exploration requires first-hand information on view corrections, which YouTube does not provide through its API. This paper presents a series of experimental techniques to work around this limitation, offering a practical contribution to the study of online attention cycles (as described in the “Data and methods” section). At the same time, this paper is also a call for greater transparency from YouTube and other online platforms about information with crucial implications for the quality of online debate.
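The abstract above notes that YouTube's API exposes no first-hand information on view corrections. As a hedged illustration of the kind of workaround it alludes to, the sketch below repeatedly polls the public Data API v3 videos.list endpoint for each video's viewCount and flags any decrease between snapshots as a likely correction. The API key, video IDs, polling interval, and function name are assumptions for illustration, not the paper's actual method.

```python
# A minimal sketch, assuming a valid YouTube Data API v3 key, of detecting
# view-count corrections by snapshotting viewCount at regular intervals.
# View counts should only grow; a drop between snapshots signals that the
# platform has removed views it considered "fake".
import time
import requests

API_KEY = "YOUR_API_KEY"      # assumption: a valid Data API v3 key
VIDEO_IDS = ["dQw4w9WgXcQ"]   # hypothetical corpus of video IDs

def fetch_view_counts(video_ids):
    """Return {video_id: viewCount} via the videos.list endpoint."""
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/videos",
        params={"part": "statistics", "id": ",".join(video_ids), "key": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    return {item["id"]: int(item["statistics"]["viewCount"])
            for item in resp.json().get("items", [])}

previous = {}
while True:
    current = fetch_view_counts(VIDEO_IDS)
    for vid, views in current.items():
        if vid in previous and views < previous[vid]:
            print(f"{vid}: correction of {previous[vid] - views} views detected")
    previous = current
    time.sleep(3600)  # hourly snapshots; the paper observed daily batch corrections
```

Snapshot frequency is a trade-off: finer polling localizes the time of a correction more precisely (e.g., the daily 5 p.m. batches reported above) but consumes more API quota per video.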
Purpose: This paper studies the intervention of focused immersion (FI) and temporal dissociation (TD) on the sharing intention of social media (SM) users with different motivations.
Design/methodology/approach: The mechanism by which different motivations of SM users influence sharing intention is explored using WarpPLS. The proposed model applies the Technology Acceptance Model (TAM) in a Hedonic Motivation System context and includes an alternate pathway of flow state.
Findings: A reciprocal relationship between FI and TD is empirically demonstrated. Insights from the “Motivated Sharing Model for Social Media” (MSMSM) show that users who use SM for information become immersed, but this immersion does not trigger the intention to share.
Practical implications: This study emphasizes the compatibility of content characteristics with the gratifications of the motivations for SM use to achieve virality. Practitioners may use the MSMSM to optimize content so that it appeals to the target audience and has a higher probability of being shared.
Originality/value: Social media users carry different motivations and choose to share select content on an overloaded platform; however, the mechanism by which different motivations drive sharing on SM has remained unexplored. The literature highlights flow as a driver of sharing, yet findings on the relationship between flow state and sharing intention on SM are inconclusive: some studies estimate a positively significant relationship, while others find it only partially or selectively significant. This study examines the intervention of the two dimensions of induced flow, namely focused immersion (FI) and temporal dissociation (TD), on sharing intention on SM.
Purpose: Social media (SM) platforms tempt individuals to communicate their perspectives in real time, rousing engaging discussions on countless topics. Besides using these platforms to post their problems and solutions, people also share activist content (AC). This study aims to understand why people participate in AC sharing on SM by investigating factors related to planned and unplanned human behaviour.
Design/methodology/approach: The study adopted a quantitative approach and administered a close-ended structured questionnaire to gather data from 431 respondents who shared AC on Facebook. The data was analysed using hierarchical regression in SPSS.
Findings: The study found a significant influence of both planned (perceived social gains (PSGs), altruism and perceived knowledge (PK)) and unplanned (extraversion and impulsiveness) human behaviour on activist content-sharing behaviour on SM. The moderating effects of enculturation and general public opinion (GPO) were also examined.
Practical implications: Sharing AC on SM is not like sharing other forms of content, such as holiday recommendations; the former can provoke consequences (sometimes undesirable) in some regions. Such content can easily leverage the firehose of deception, maximising the vulnerability of those involved. By relating human behaviour to AC sharing on SM, this work offers significant insights that enable individuals to manage their shared content and mitigate probable consequences.
Originality/value: This work combined two opposite constructs of human behaviour, planned and unplanned, to explain individual behaviour in the specific context of AC sharing on SM.
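Since the abstract above reports a hierarchical regression with moderation effects (run in SPSS), a hedged sketch of that general analysis pattern may help readers unfamiliar with it. The sketch below uses Python with statsmodels rather than SPSS, entering predictor blocks stepwise and then adding an interaction term; the variable names, file name, and model specifications are hypothetical and do not reproduce the study's actual models.

```python
# A minimal sketch of hierarchical (blockwise) regression with a moderation
# term. Variable names (psg, altruism, pk, extraversion, impulsiveness,
# enculturation, sharing) and the CSV file are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("activist_content_survey.csv")  # hypothetical survey data

# Step 1: planned-behaviour block only.
m1 = smf.ols("sharing ~ psg + altruism + pk", data=df).fit()

# Step 2: add the unplanned-behaviour block.
m2 = smf.ols("sharing ~ psg + altruism + pk + extraversion + impulsiveness",
             data=df).fit()

# Step 3: add a moderation (interaction) term, e.g. enculturation moderating
# the effect of perceived social gains on sharing.
m3 = smf.ols("sharing ~ psg * enculturation + altruism + pk "
             "+ extraversion + impulsiveness", data=df).fit()

# Compare explained variance across steps, as in a hierarchical regression.
for step, m in enumerate((m1, m2, m3), start=1):
    print(f"Step {step}: R^2 = {m.rsquared:.3f} (adj. {m.rsquared_adj:.3f})")
```

The point of the stepwise structure is the change in R-squared between blocks: if Step 2 adds explanatory power over Step 1, the unplanned-behaviour constructs contribute beyond the planned ones, and a significant interaction coefficient in Step 3 indicates moderation.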