Current web tracking practices pose a constant threat to the privacy of Internet users. As a result, the research community has recently proposed different tools to combat well-known tracking methods. However, the early detection of new, previously unseen tracking systems is still an open research problem. In this paper, we present TrackSign, a novel approach to discover new web tracking methods. The main idea behind TrackSign is the use of code fingerprinting to identify common pieces of code shared across multiple domains. To detect tracking fingerprints, TrackSign builds a novel 3-mode network graph that captures the relationships between fingerprints, resources and domains. We evaluated TrackSign on the top-100K most popular Internet domains, covering almost 1M web resources from more than 5M HTTP requests. Our results show that our method can detect new web tracking resources with high precision (over 92%). TrackSign detected 30K new trackers, more than 10K new tracking resources and 270K new tracking URLs not yet covered by the most popular blacklists. Finally, we also validate the effectiveness of TrackSign with more than 20 years of historical data from the Internet Archive.
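The fingerprinting idea behind TrackSign can be sketched in a few lines: hash a normalized copy of each resource's code and flag fingerprints that recur across several domains. This is a minimal illustration, not the paper's implementation; the function names, the whitespace-only normalization, and the `min_domains` threshold are assumptions, and TrackSign's actual 3-mode graph analysis over fingerprints, resources and domains is more elaborate.

```python
import hashlib
from collections import defaultdict

def fingerprint(code: str) -> str:
    """Hash a normalized code snippet so identical logic maps to one ID."""
    normalized = " ".join(code.split())  # collapse whitespace differences
    return hashlib.sha256(normalized.encode()).hexdigest()[:16]

def shared_fingerprints(resources, min_domains=2):
    """Group resources by fingerprint and keep those seen on several domains.

    `resources` is an iterable of (domain, resource_url, code) tuples.
    Returns {fingerprint: set_of_domains} for fingerprints shared by at
    least `min_domains` distinct domains.
    """
    domains_by_fp = defaultdict(set)
    for domain, _url, code in resources:
        domains_by_fp[fingerprint(code)].add(domain)
    return {fp: doms for fp, doms in domains_by_fp.items()
            if len(doms) >= min_domains}
```

A fingerprint served from many unrelated domains is a candidate shared library or tracking script; the graph step in the paper then decides which of those candidates are trackers.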
Detecting network traffic anomalies is crucial for network operators, as it helps to identify security incidents and to monitor the availability of networked services. Although anomaly detection has received significant attention in the literature, the automatic classification of network anomalies still remains an open problem. In this paper, we introduce a novel scheme and build a system to detect and classify anomalies based on an elegant combination of frequent item-set mining with decision tree learning. Our approach has two key features: 1) effectiveness, with a very low false-positive rate; and 2) simplicity, so an operator can easily comprehend how our detector and classifier operate. We evaluate our scheme using traffic traces from two real networks, namely the European-wide backbone network of GÉANT and a regional peering link in Spain. In both cases, we achieve an overall classification accuracy greater than 98% and a false-positive rate of only about 1%. In addition, we show that it is possible to train our classifier with data from one network and use it to effectively classify anomalies in a different network. Finally, we have built a corresponding anomaly detection and classification system and have deployed it as part of an operational platform, where it is successfully used to monitor two 10 Gb/s peering links between the Catalan and the Spanish national research and education networks (NRENs).
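The frequent item-set mining step can be sketched with a naive Apriori-style support counter over flow-feature transactions. This is an illustrative assumption, not the paper's miner: the feature strings, the `min_support` threshold, and the small `max_size` are made up, and the paper additionally feeds the mined item-sets into decision tree learning.

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support, max_size=2):
    """Return item-sets that appear in at least `min_support` transactions.

    Each transaction is a set of flow features, e.g.
    {'dst_port=80', 'proto=tcp'}. Counting every combination up to
    `max_size` is exponential in general, so real miners prune candidates;
    this brute-force version is only for illustration.
    """
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))  # canonical order so identical sets collide
        for size in range(1, max_size + 1):
            for combo in combinations(items, size):
                counts[combo] += 1
    return {iset: c for iset, c in counts.items() if c >= min_support}
```

Item-sets with high support summarize what the anomalous flows have in common (for instance, a shared destination port and protocol), which is what makes the resulting classifier easy for an operator to interpret.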
With a rapidly increasing market of millions of devices, intelligent virtual assistants (IVAs) have become a new attack vector for exploiting security breaches. In this work we examine the third revision of the Amazon Echo device and its Alexa ecosystem from a security perspective, focusing our efforts on the interaction between the user and the device. We found the client-server communications to be robust, as they are encrypted, but while studying the voice message recognition system we discovered a method to execute voice commands remotely, a feature not available by default. This method could be used against the user if an attacker manages to perform a session hijacking attack on the web or mobile clients.
UPC), where he received the B.Sc. degree in Computer Science in 2008 and the M.Sc. degree in Computer Architecture, Networks and Systems in 2010. He has several years of experience in network and system administration and currently holds a Projects Scholarship at UPC. His expertise and research interests are in computer networks, especially in the fields of network monitoring, web tracking and anomaly detection.
The pervasiveness of online web tracking poses a constant threat to the privacy of Internet users. Millions of users currently employ content-blockers in their web browsers to block tracking resources in real time. Although content-blockers are based on blacklists, which are known to be difficult to maintain and easy to evade, the research community has not yet succeeded in replacing them with better alternatives. Most of the methods recently proposed in the literature obtain good detection accuracy, but at the expense of increased complexity, making them more difficult for the end user to maintain and configure. In this paper, we present a new web tracking detection method, called Deep Tracking Detector (DTD), that analyzes the properties of URL strings to detect tracking resources, without using any external features. Consequently, DTD can easily be implemented as a browser plugin and operate in real time. Our experimental results, with more than 5M HTTP requests from 100K websites, show that DTD achieves a detection accuracy higher than 97% by looking only at the URL of the resources.
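Working from the URL string alone means the detector only sees lexical properties of the address. The sketch below shows the kind of features one could extract; the feature names and the keyword list are illustrative assumptions, and DTD itself learns its model from the URL strings rather than matching a hand-picked keyword list.

```python
from urllib.parse import urlparse, parse_qs

# Illustrative keywords only; a real detector learns such patterns from data.
TRACKING_HINTS = ("track", "pixel", "beacon", "analytics")

def url_features(url: str) -> dict:
    """Compute simple lexical features from a URL string, with no
    external context such as page content or request headers."""
    parsed = urlparse(url)
    query = parse_qs(parsed.query)
    lower = url.lower()
    return {
        "length": len(url),
        "digit_ratio": sum(c.isdigit() for c in url) / max(len(url), 1),
        "num_query_params": len(query),
        "has_tracking_hint": any(h in lower for h in TRACKING_HINTS),
    }
```

Because everything here is derived from the string itself, such a feature extractor runs in microseconds per request, which is what makes a purely URL-based detector practical inside a browser plugin.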