As virtualization technology gains in popularity, so do attempts to compromise the security and integrity of virtualized computing resources. Anti-virus software and firewall programs are typically deployed in the guest virtual machine to detect malicious software. These security measures are effective in detecting known malware, but do little to protect against new variants of intrusions. Intrusion detection systems (IDSs) can be used to detect malicious behavior. Most intrusion detection systems for virtual execution environments track behavior at the application or operating system level, using virtualization as a means to isolate themselves from a compromised virtual machine.
In this paper, we present a novel approach to intrusion detection for virtual server environments that utilizes only information available from the perspective of the virtual machine monitor (VMM). Such an IDS can harness the ability of the VMM to isolate and manage several virtual machines (VMs), making it possible to monitor for intrusions at a common level across VMs. It also offers unique advantages over recent advances in intrusion detection for virtual machine environments. By working purely at the VMM level, the IDS does not depend on structures or abstractions visible to the OS (e.g., file systems), which are susceptible to attack and can be modified by malware to contain corrupted information (e.g., the Windows registry). In addition, being situated within the VMM eases deployment: the IDS is not tied to a specific OS and can be deployed transparently below different operating systems.
Due to the semantic gap between the information available to the VMM and the actual application behavior, we apply data mining techniques to extract useful knowledge from the raw, low-level architectural data. We show in this paper that by working entirely at the VMM level, we are able to capture enough information to characterize normal executions and identify the presence of abnormal, malicious behavior. Our experiments on over 300 real-world malware samples and exploits illustrate that there is sufficient information embedded within the VMM-level data to allow accurate detection of malicious attacks, with an acceptable false alarm rate.
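The abstract does not specify how the raw architectural data is turned into inputs for the detector. One common way to bridge such a semantic gap is to slice the low-level event stream into fixed-size windows and count event occurrences per window, producing one feature vector per window. The sketch below illustrates this idea; the event names (`"io"`, `"page_fault"`) and the windowing scheme are illustrative assumptions, not the paper's actual feature set.

```python
from collections import Counter

def window_features(events, event_types, window=100):
    """Turn a stream of low-level event labels into per-window count vectors.

    events      -- sequence of event labels observed at the VMM level
                   (hypothetical labels; the paper's features may differ)
    event_types -- the fixed ordering of labels defining the feature space
    window      -- number of events per window
    """
    vectors = []
    for start in range(0, len(events) - window + 1, window):
        counts = Counter(events[start:start + window])
        # One feature vector per window: count of each event type.
        vectors.append([counts[t] for t in event_types])
    return vectors
```

Vectors produced this way can then be fed to any anomaly detector that characterizes "normal" executions and flags deviations from them.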
The Local Outlier Factor (LOF) is a powerful anomaly detection method from machine learning and classification. The algorithm defines the notion of a local outlier, in which the degree to which an object is outlying depends on the density of its local neighborhood; each object can be assigned an LOF value that represents the likelihood of that object being an outlier. Although the concept of a local outlier is a useful one, computing LOF values for every data object requires a large number of k-nearest neighbor queries, and this computational overhead can limit the use of LOF. Given the growing popularity of Graphics Processing Units (GPUs) in general-purpose computing domains, now equipped with high-level programming languages designed specifically for general-purpose applications (e.g., CUDA), we apply this parallel computing approach to accelerate LOF. In this paper we explore how a CUDA-based GPU implementation of the k-nearest neighbor algorithm can accelerate LOF classification. We achieve more than a 100X speedup over a multi-threaded dual-core CPU implementation. We also consider the impact of the input data set size, the neighborhood size (i.e., the value of k), and the feature space dimension on execution time.
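The definition above can be made concrete with a minimal sequential sketch of LOF: for each point, find its k nearest neighbors, compute its local reachability density from reachability distances, and take the ratio of its neighbors' densities to its own. This is a plain-Python illustration of the standard LOF formulation (Breunig et al.), not the paper's GPU implementation; it is exactly the brute-force k-NN query step here that the CUDA version parallelizes.

```python
import math

def knn(points, i, k):
    # Indices of the k nearest neighbors of points[i] (excluding itself).
    order = sorted(range(len(points)),
                   key=lambda j: math.dist(points[i], points[j]))
    return [j for j in order if j != i][:k]

def lof(points, k):
    """LOF score per point: ~1 for inliers, >> 1 for outliers.

    Assumes no duplicate points (duplicates give zero k-distance
    and would need the usual tie-handling refinements).
    """
    n = len(points)
    neighbors = [knn(points, i, k) for i in range(n)]
    # k-distance: distance to the k-th nearest neighbor.
    kdist = [math.dist(points[i], points[neighbors[i][-1]]) for i in range(n)]

    def lrd(i):
        # Local reachability density: inverse mean reachability distance.
        reach = [max(kdist[j], math.dist(points[i], points[j]))
                 for j in neighbors[i]]
        return len(reach) / sum(reach)

    lrds = [lrd(i) for i in range(n)]
    # LOF: average neighbor density relative to the point's own density.
    return [sum(lrds[j] for j in neighbors[i]) / (k * lrds[i])
            for i in range(n)]
```

Note that every point issues a k-NN query over the whole data set, giving O(n^2) distance computations per pass; since each query is independent, they map naturally onto GPU threads, which is the parallelism the paper exploits.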