Abstract

Online algorithms have received much blame for polarizing emotions during the 2016 U.S. Presidential election. We use transfer entropy to measure directed information flows from human emotions to YouTube's video recommendation engine, and back, from recommended videos to users' emotions. We find that algorithmic recommendations communicate a statistically significant amount of positive and negative affect to humans. Joy is prevalent in emotional polarization, while sadness and fear play significant roles in emotional convergence. These findings can inform the design of more socially responsible algorithms by directing attention to the emotional content of algorithmic recommendations. Employing a computational-experimental mixed-method approach, the study also serves as a demonstration of how the mathematical theory of communication can be used both to quantify human-machine communication and to test hypotheses in the social sciences.

Communicating with algorithms: A transfer entropy analysis of emotions-based escapes from online echo chambers

Algorithms intermediate almost all of the roughly 3.5 hours per day that each American communicates online (Center for the Digital Future, 2016). Algorithms decide how fast information is transferred and what information is presented (Hannak et al., 2013; Lazer, 2015). The proactive role of algorithms in communication processes can be conceptualized as an active dialogue between users and algorithms: users communicate preferences (advertently or not), which are interpreted by algorithms (e.g., by recommender engines or bots), which then send responses back to users, who receive, interpret, and react to the algorithmic reply.

In this study, we quantify the flow of information in this conceptual communication channel between users and socially responsive algorithms. Specifically, we first demonstrate that human emotions (joy, fear, sadness, etc.) are communicated to the selection mechanism of videos recommended by YouTube (via an individual's choice of search terms), and second that the emotions of the recommended videos influence how viewers feel after being exposed to them. We study emotions related to Presidential candidates from the 2016 U.S. election. Rather than making assumptions about how recommender systems work, we systematically manipulate YouTube's real-world recommender system to create stimuli in a mixed-method computational-experimental approach.

Algorithmic Filter Bubbles and Echo Chambers

In the two-step communication between algorithms and humans, it is common (but by no means obligatory) for the algorithm to take on the role of a confirmatory communication partner that reassures and often reinforces received information through positive feedback. This leads to the notorious "filter bubbles" (Pariser, 2011). The separation of users from contradictory information also gathers like-minded people in similar communication spaces, which then creates reinforcing echo chambers (
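For readers unfamiliar with the central measure, the canonical definition of transfer entropy (Schreiber, 2000) is reproduced below. This excerpt does not state the history lengths or the estimator actually used in the study, so this should be read as the standard formulation rather than the paper's exact specification; here X can be thought of as the emotion signal of users' search terms and Y as the emotion signal of the recommended videos (or vice versa for the reverse direction).

\[
T_{X \to Y} \;=\; \sum p\!\left(y_{t+1},\, y_t^{(k)},\, x_t^{(l)}\right)\,
\log_2 \frac{p\!\left(y_{t+1} \,\middle|\, y_t^{(k)},\, x_t^{(l)}\right)}
            {p\!\left(y_{t+1} \,\middle|\, y_t^{(k)}\right)}
\]

where \(y_t^{(k)} = (y_t, \ldots, y_{t-k+1})\) and \(x_t^{(l)} = (x_t, \ldots, x_{t-l+1})\) denote the length-k and length-l histories of the two processes. Intuitively, \(T_{X \to Y}\) is the reduction in uncertainty about Y's next state gained from knowing X's past, over and above what Y's own past already provides; unlike mutual information, it is directional, which is what allows the two flows (users to algorithm, algorithm to users) to be measured separately.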
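To make the computational side of the mixed-method approach more tangible, the following is a minimal sketch, assuming the emotion measurements have been reduced to discrete label sequences (e.g., a dominant-emotion label per search term or per video). The function transfer_entropy, the plug-in (maximum-likelihood) estimator, the example labels, and the shuffle-based significance check are illustrative assumptions, not the authors' code or their actual test procedure.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, k=1, l=1):
    """Plug-in estimate of transfer entropy T_{X->Y} in bits.

    x, y : equal-length sequences of discrete symbols
           (e.g., dominant-emotion labels such as "joy", "fear").
    k, l : history lengths for the target (y) and source (x).
    """
    n, m = len(y), max(k, l)
    triples = Counter()  # counts of (y_next, y_history, x_history)
    for t in range(m - 1, n - 1):
        y_hist = tuple(y[t - k + 1: t + 1])
        x_hist = tuple(x[t - l + 1: t + 1])
        triples[(y[t + 1], y_hist, x_hist)] += 1
    total = sum(triples.values())

    # Marginal counts needed for the two conditional probabilities.
    c_yh, c_yn_yh, c_yh_xh = Counter(), Counter(), Counter()
    for (yn, yh, xh), c in triples.items():
        c_yh[yh] += c
        c_yn_yh[(yn, yh)] += c
        c_yh_xh[(yh, xh)] += c

    te = 0.0
    for (yn, yh, xh), c in triples.items():
        p_cond_full = c / c_yh_xh[(yh, xh)]          # p(y_next | y_hist, x_hist)
        p_cond_self = c_yn_yh[(yn, yh)] / c_yh[yh]   # p(y_next | y_hist)
        te += (c / total) * np.log2(p_cond_full / p_cond_self)
    return te

# Hypothetical usage: do users' emotions drive the recommended videos' emotions?
user = ["joy", "joy", "fear", "joy", "sad", "joy", "fear", "sad"]
video = ["joy", "joy", "joy", "fear", "joy", "sad", "joy", "fear"]
te_obs = transfer_entropy(user, video)

# Crude significance check: compare against source-shuffled surrogates,
# which destroy any directed dependence while preserving the marginals.
rng = np.random.default_rng(0)
surrogates = [transfer_entropy(list(rng.permutation(user)), video)
              for _ in range(1000)]
p_value = np.mean([s >= te_obs for s in surrogates])
```

In practice, real analyses would use far longer sequences than this toy example and a bias-corrected or permutation-calibrated estimator, since plug-in transfer entropy is upwardly biased on short samples.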