In terms of scalability, cost, and ease of deployment, the Peer-to-Peer (P2P) approach has emerged as a promising solution for video streaming applications. Its architecture enables end-hosts, called peers, to relay the video stream to one another. P2P systems are in fact networks of users who control peers. User behavior is therefore crucial to the performance of these systems, because it directly impacts the streaming flow. To understand user behavior, several measurement studies have been carried out on different video streaming systems. Each measurement analyzes a particular system, focuses on specific metrics, and presents its own insights. However, a single study based on one system and a specific set of metrics is not sufficient to provide a complete model of user behavior that covers all of its components and the impact of external factors on them. In this paper, we propose a comparison and a synthesis of these measurements. First, we review video streaming architectures, followed by a survey of user behavior measurements in these architectures. Then, we gather the insights revealed in these measurements and compare them to identify points of consensus and contrast. Finally, we extract the components of user behavior, the external factors that affect them, and the relationships among them. We also point out the aspects of user behavior that require further investigation.
Understanding the causes of and correlations behind algorithmic decisions is currently one of the major challenges of computer science, addressed under the umbrella term "explainable AI" (XAI). Being able to explain an AI-based system may help to make algorithmic decisions more satisfying and acceptable, to better control and update AI-based systems in case of failure, to build more accurate models, and to discover new knowledge directly or indirectly. On the legal side, the question of whether the General Data Protection Regulation (GDPR) provides data subjects with a right to explanation in the case of automated decision-making has likewise been the subject of a heated doctrinal debate. While arguing that the right to explanation in the GDPR should result from an interpretative analysis of several GDPR provisions read jointly, the authors move this debate forward by discussing the technical and legal feasibility of explaining algorithmic decisions. Legal limits, in particular the secrecy of algorithms, as well as technical obstacles, could potentially obstruct the practical implementation of this right. By adopting an interdisciplinary approach, the authors explore not only whether it is possible to translate the EU legal requirements for an explanation into actual machine learning decision-making, but also whether those limitations can shape the way the legal right is used in practice.