Television viewers’ attention is increasingly divided between television and “second screens”, for example when viewing television broadcasts while following the related social media discussion on a tablet computer. The attentional costs of such multitasking may vary with the ebb and flow of the social media channel, such as its emotional contents. In the present study, we tested the hypothesis that negative social media messages would draw more attention than similar positive messages. Specifically, news broadcasts were presented in isolation and with simultaneous positive or negative Twitter messages on a tablet to 38 participants in a controlled experiment. Recognition memory, gaze tracking, cardiac responses, and self-reports were used as attentional indices. The presence of any tweets on the tablet decreased attention to the news broadcasts. As expected, negative tweets drew longer viewing times and elicited more attention than positive tweets. Negative tweets did not, however, decrease attention to the news broadcasts. Taken together, the present results demonstrate that a negativity bias exists for social media messages in media multitasking; however, this effect does not amplify the overall detrimental effects of media multitasking.
Hate speech has been identified as a pressing problem in society, and several automated approaches have been designed to detect and prevent it. This paper reports and reflects upon an action research setting, consisting of a multi-organizational collaboration conducted during the 2017 Finnish municipal elections, wherein a technical infrastructure was designed to automatically monitor candidates' social media updates for hate speech. The setting allowed us to engage in a twofold investigation. First, the collaboration offered a unique view for exploring how hate speech emerges as a technical problem. The project developed an adequately performing algorithmic solution using supervised machine learning. We tested the performance of various feature extraction and machine learning methods and ended up using a combination of Bag-of-Words feature extraction with Support-Vector Machines. However, the automated approach required heavy simplification, such as using rudimentary scales for classifying hate speech and relying on word-based methods, while in reality hate speech is a linguistic and social phenomenon with various tones and forms. Second, the action-research-oriented setting allowed us to observe affective responses, such as the hopes, dreams, and fears related to machine learning technology. Based on participatory observations, project artifacts and documents, interviews with project participants, and online reactions to the detection project, we identified participants' aspirations for effective automation and for the neutrality and objectivity they expected an algorithmic system to introduce. However, the participants expressed more critical views toward the system after the monitoring process. Our findings highlight how the powerful expectations related to technology can easily end up dominating a project dealing with a contested, topical social issue.
We conclude by discussing the problematic aspects of datafying hate and suggesting some practical implications for hate speech recognition.
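The Bag-of-Words plus Support-Vector Machine combination the abstract mentions can be sketched as a minimal scikit-learn pipeline. The placeholder messages, binary labels, and English text below are illustrative assumptions; the project's actual annotated Finnish training data and labeling scale are not shown.

```python
# Sketch of a Bag-of-Words + linear SVM text classifier,
# the feature/model combination the paper reports choosing.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical toy training messages; a real system would use
# thousands of annotated social media updates.
texts = [
    "you are worthless and should leave",      # stand-ins for hateful examples
    "those people do not belong here",
    "go back where you came from",
    "thanks for the thoughtful debate",        # stand-ins for neutral examples
    "great turnout at the campaign event",
    "looking forward to the election results",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = hate speech, 0 = not (a rudimentary binary scale)

# CountVectorizer produces word-count (Bag-of-Words) features;
# LinearSVC fits a linear Support-Vector Machine on them.
clf = make_pipeline(CountVectorizer(), LinearSVC())
clf.fit(texts, labels)
prediction = clf.predict(["what a thoughtful debate"])
```

As the abstract itself cautions, a purely word-based classifier like this cannot capture tone, irony, or social context, which is part of why the simplification was problematic.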
Visual object categorization is one of the most active research topics in computer vision, and the Caltech-101 data set is one of the standard benchmarks for evaluating method performance. Despite its wide use, the data set has certain weaknesses: i) the objects are practically in a standard pose and scale in the middle of the images, and ii) the background varies too little in certain categories, making it more discriminative than the foreground objects. In this work, we demonstrate how these weaknesses bias the evaluation results in an undesired manner. In addition, we reduce the bias effect by replacing the backgrounds with random landscape images from Google and by applying random Euclidean transformations to the foreground objects. We demonstrate how the proposed randomization process makes visual object categorization more challenging, improving the relative results of methods that categorize objects by their visual appearance and are invariant to pose changes. The new data set is made publicly available for other researchers.
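The randomization the abstract describes (random backgrounds plus random Euclidean transformations of the foreground) can be sketched as below. The function names, grayscale arrays, translation range, and nearest-neighbour sampling are all illustrative assumptions, not the authors' implementation, and any background array can stand in for the landscape images.

```python
import numpy as np

def random_euclidean_transform(rng, max_shift=8.0):
    """Draw a random 2-D Euclidean (rigid) transform: rotation + translation."""
    theta = rng.uniform(-np.pi, np.pi)
    tx, ty = rng.uniform(-max_shift, max_shift, size=2)
    return np.array([[np.cos(theta), -np.sin(theta), tx],
                     [np.sin(theta),  np.cos(theta), ty]])

def composite(foreground, mask, background, rng):
    """Warp a grayscale foreground by a random Euclidean transform and
    paste it onto a copy of the background where the mask is set."""
    h, w = background.shape
    T = random_euclidean_transform(rng)
    # Invert the rigid transform (R^T, -R^T t) for inverse mapping.
    Rinv = T[:, :2].T
    tinv = -Rinv @ T[:, 2]
    out = background.copy()
    ys, xs = np.mgrid[0:h, 0:w]
    dst = np.stack([xs.ravel(), ys.ravel()]).astype(float)  # (x, y) columns
    src = Rinv @ dst + tinv[:, None]  # map each output pixel back to the source
    sx = np.round(src[0]).astype(int)  # nearest-neighbour sampling
    sy = np.round(src[1]).astype(int)
    fh, fw = foreground.shape
    valid = (sx >= 0) & (sx < fw) & (sy >= 0) & (sy < fh)
    sel = np.flatnonzero(valid)
    on = mask[sy[valid], sx[valid]] > 0  # only copy masked foreground pixels
    out.ravel()[sel[on]] = foreground[sy[valid], sx[valid]][on]
    return out
```

Repeating this per image with a fresh random transform and a fresh background decouples category identity from pose, scale, and background statistics, which is the point of the proposed randomization.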