Private browsing has been a popular privacy feature built into all mainstream browsers since 2005. Despite its prevalent use, however, the security of this feature has received little attention from the research community. To the best of our knowledge, no prior study has systematically evaluated the security of private browsing across all major browsers and from all angles: examining not only the memory, but also the underlying database structure on disk and the web traffic. In this paper, we present an up-to-date and comprehensive analysis of private browsing across the four most popular web browsers: IE, Firefox, Chrome and Safari. We report that all browsers under study suffer from a variety of vulnerabilities, many of which have not been reported or known before. The problems are generally caused by the following factors: lax permission control that allows extensions to run in the private mode with unrestricted privilege; inconsistent implementations of the underlying SQLite database between the private and usual modes; neglect of cross-mode interference when the two modes are run in parallel; and a lack of attention to side-channel timing attacks. All of the attacks have been experimentally verified, and countermeasures are proposed.
Reducing the power consumption of any computer system is now an important concern, although this should be done in a manner that is not detrimental to the users of that system. We present a number of policies that can be applied to multi-use clusters, where computers are shared between interactive users and high-throughput computing. We evaluate these policies through trace-driven simulation in order to determine their effects on the power consumed by the high-throughput workload and their impact on high-throughput users. We further evaluate the policies under heavier loads by synthetically generating workloads based on the profiled workload observed on our system. We demonstrate that these policies could save approximately 45% of the energy currently used for our high-throughput jobs, relative to our current cluster policies, without affecting the high-throughput users' experience.
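The trace-driven evaluation described above lends itself to a compact illustration. The Python sketch below replays a toy job trace on a single machine under two hypothetical policies (always-on versus sleep-when-idle) and compares the energy consumed. It is purely illustrative: the power figures, trace format and policy names are assumptions, not the paper's actual simulator or policies.

# Minimal sketch of trace-driven policy evaluation (illustrative only;
# the power draws, trace format and policies are assumptions, not the
# paper's simulator).

ACTIVE_W, IDLE_W, SLEEP_W = 120.0, 60.0, 4.0  # assumed power draws (watts)

# A toy trace: (arrival_time_s, runtime_s) for each high-throughput job.
trace = [(0, 3600), (1800, 7200), (9000, 1800)]

def energy_wh(trace, horizon_s, sleep_when_idle):
    """Replay the trace on one machine and return energy in watt-hours."""
    busy = [False] * horizon_s
    t = 0
    for arrival, runtime in trace:
        start = max(arrival, t)  # jobs run back-to-back, FIFO
        for s in range(start, min(start + runtime, horizon_s)):
            busy[s] = True
        t = start + runtime
    idle_w = SLEEP_W if sleep_when_idle else IDLE_W
    joules = sum(ACTIVE_W if b else idle_w for b in busy)
    return joules / 3600.0

horizon = 24 * 3600  # simulate one day
always_on = energy_wh(trace, horizon, sleep_when_idle=False)
sleeping = energy_wh(trace, horizon, sleep_when_idle=True)
print(f"always-on: {always_on:.1f} Wh, sleep-when-idle: {sleeping:.1f} Wh "
      f"({100 * (1 - sleeping / always_on):.0f}% saving)")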
(2014) 'Reduction of wasted energy in a volunteer computing system through Reinforcement Learning', Sustainable Computing: Informatics and Systems, 4 (4), pp. 262-275. http://dx.doi.org/10.1016/j.suscom.2014.08.014

Volunteer computing systems provide an easy mechanism for users who wish to perform large amounts of High Throughput Computing work. However, if the volunteer computing system is deployed over a shared set of computers where interactive users can seize back control of the computers, this can lead to wasted computational effort and hence wasted energy. Determining on which resource to deploy a particular piece of work, or even whether to defer deployment for the time being, is a difficult problem to solve: it depends both on the free time expected to be available on the computers within the volunteer computing system and on the expected runtime of the work, both of which are difficult to determine a priori. We develop here a Reinforcement Learning approach to solving this problem and demonstrate that it can provide a reduction in energy consumption of between 30% and 53%, depending on whether we can tolerate an increase in the overheads incurred.
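To make the Reinforcement Learning formulation more concrete, the following Python sketch shows tabular Q-learning applied to a toy deploy-or-defer decision. The state (hour of day), eviction probabilities and reward values are invented for illustration and do not reflect the paper's actual model; the intent is only to show how an agent can learn when deploying work is likely to be wasted.

import random
from collections import defaultdict

# Toy Q-learning sketch of the deploy/defer decision (illustrative
# assumptions throughout; not the paper's formulation).

ACTIONS = ("deploy", "defer")
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = defaultdict(float)  # (state, action) -> estimated value

def choose(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def step(state, action):
    """Toy environment: the state is the hour of day; interactive users
    are assumed likely to evict jobs during office hours (9-17)."""
    hour = state
    if action == "deploy":
        evicted = random.random() < (0.7 if 9 <= hour < 17 else 0.1)
        reward = -5.0 if evicted else 1.0  # wasted energy vs useful work
    else:
        reward = -0.1  # small cost for deferring
    return (hour + 1) % 24, reward

state = 0
for _ in range(50_000):  # online learning loop
    action = choose(state)
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state

for h in range(24):  # learned policy: defer during office hours
    print(h, max(ACTIONS, key=lambda a: Q[(h, a)]))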
In order to reliably generate electricity to meet the demands of the customer base, it is essential to match supply with demand. Short-term load forecasting is utilised both in real-time scheduling of electricity and in load-frequency control. This paper aims to improve the accuracy of load forecasting by using machine learning techniques to predict 30 minutes ahead from smart meter data. We utilised the k-means clustering algorithm to cluster similar individual consumers and fit a distinct model per cluster. Public holidays were taken into consideration for their effect on customer behaviour, as was the periodicity of the day, week and year. We evaluated a number of approaches for predicting future energy demand, including Random Forests, Neural Networks, Long Short-Term Memory Neural Networks and Support Vector Regression models. We found that Random Forests performed best at each clustering level, and that clustering similar consumers and aggregating their predictions outperformed a single model in each case. These findings suggest that clustering smart meter data prior to forecasting is an important step in improving accuracy when using machine learning techniques.
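The cluster-then-forecast pipeline can be sketched with standard scikit-learn components. The example below, using synthetic half-hourly data, clusters consumers with k-means, fits one Random Forest per cluster on lagged features, and sums the per-cluster forecasts. The feature construction, number of clusters and data are assumptions for illustration only, not the paper's exact setup.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

# Illustrative sketch of cluster-then-forecast. Each row of `usage` is
# one consumer's half-hourly load; the data here is synthetic.
rng = np.random.default_rng(0)
n_consumers, n_halfhours = 200, 48 * 28  # four weeks of readings
usage = rng.gamma(2.0, 0.5, size=(n_consumers, n_halfhours))

# 1. Cluster consumers on their average daily profile.
daily_profile = usage.reshape(n_consumers, -1, 48).mean(axis=1)
k = 4  # assumed number of clusters
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(daily_profile)

# 2. Per cluster: aggregate load, build lagged features, and fit a
#    forest predicting one half-hour step (30 minutes) ahead.
LAGS = 48  # one day of history as input features
total_forecast, total_actual = 0.0, 0.0
for c in range(k):
    series = usage[labels == c].sum(axis=0)  # aggregate cluster load
    X = np.array([series[i:i + LAGS] for i in range(len(series) - LAGS)])
    y = series[LAGS:]
    X_train, X_test, y_train, y_test = X[:-48], X[-48:], y[:-48], y[-48:]
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    total_forecast += model.predict(X_test)
    total_actual += y_test

# 3. Sum per-cluster forecasts to get the system-level prediction.
mape = np.mean(np.abs((total_actual - total_forecast) / total_actual)) * 100
print(f"aggregated MAPE over the last day: {mape:.1f}%")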
High Throughput Computing (HTC) is a powerful paradigm allowing vast quantities of independent work to be performed simultaneously. However, until recently little evaluation has been performed on the energy impact of HTC. Many organisations now seek to minimise energy consumption across their IT infrastructure, though it is unclear how this will affect the usability of HTC systems. We present here HTC-Sim, a simulation system which allows the evaluation of different energy reduction policies across an HTC system comprising a collection of computational resources dedicated to HTC work and resources provided through cycle scavenging (a Desktop Grid). We demonstrate that our simulation software scales linearly with increasing HTC workload.