Information-flow security policies are an appealing way of specifying confidentiality and integrity policies in information systems. Most previous work on language-based security has assumed that programs run in a closed, managed environment and that they use potentially unsafe constructs, such as declassification, to interface to external communication channels, perhaps after encrypting data to preserve its confidentiality. This situation is unsatisfactory for systems that need to communicate over untrusted channels or use untrusted persistent storage, since the connection between the cryptographic mechanisms used in the untrusted environment and the abstract security labels used in the trusted language environment is ad hoc and unclear. This paper addresses this problem in three ways: First, it presents a simple, security-typed language with a novel mechanism called packages that provides an abstract means for creating opaque objects and associating them with security labels; well-typed programs in this language enforce noninterference. Second, it shows how to implement these packages using public-key cryptography. This implementation strategy uses a variant of Myers and Liskov's decentralized label model, which supports a rich label structure in which mutually distrusting data owners can specify independent confidentiality and integrity requirements. Third, it demonstrates that this implementation of packages is sound with respect to Dolev-Yao style attackers: such an attacker cannot determine the contents of a package without possessing the appropriate keys, as determined by the security label on the package.
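As a rough illustration of the package idea described above, the Java sketch below seals a payload under the public keys of the readers named in a decentralized-style label, so that only those readers can open it. The class and method names (Label, SealedPackage, seal, open) and the direct use of RSA are our own simplifications for illustration, not the paper's actual construction or API.

```java
import java.security.*;
import java.util.*;
import javax.crypto.Cipher;

// Illustrative sketch only: a "package" seals a small payload under the public
// keys of the readers listed in its security label. Names are hypothetical.
public class SealedPackage {
    // A decentralized-style label: one owner plus the principals allowed to read.
    public record Label(String owner, Map<String, PublicKey> readers) {}

    private final Label label;                      // the label travels with the ciphertext
    private final Map<String, byte[]> ciphertexts;  // one ciphertext per reader

    private SealedPackage(Label label, Map<String, byte[]> cts) {
        this.label = label;
        this.ciphertexts = cts;
    }

    // Seal: encrypt the payload separately for every reader named in the label.
    // (A realistic implementation would use hybrid encryption for large payloads.)
    public static SealedPackage seal(byte[] payload, Label label) throws GeneralSecurityException {
        Map<String, byte[]> cts = new HashMap<>();
        for (var reader : label.readers().entrySet()) {
            Cipher c = Cipher.getInstance("RSA/ECB/PKCS1Padding");
            c.init(Cipher.ENCRYPT_MODE, reader.getValue());
            cts.put(reader.getKey(), c.doFinal(payload));
        }
        return new SealedPackage(label, cts);
    }

    // Open: only a principal holding a matching private key recovers the payload,
    // mirroring the Dolev-Yao claim that attackers without the key learn nothing.
    public byte[] open(String principal, PrivateKey key) throws GeneralSecurityException {
        byte[] ct = ciphertexts.get(principal);
        if (ct == null) throw new SecurityException(principal + " is not a reader of this package");
        Cipher c = Cipher.getInstance("RSA/ECB/PKCS1Padding");
        c.init(Cipher.DECRYPT_MODE, key);
        return c.doFinal(ct);
    }

    public static void main(String[] args) throws Exception {
        KeyPairGenerator g = KeyPairGenerator.getInstance("RSA");
        g.initialize(2048);
        KeyPair alice = g.generateKeyPair();
        Label l = new Label("alice", Map.of("alice", alice.getPublic()));
        SealedPackage p = seal("secret".getBytes(), l);
        System.out.println(new String(p.open("alice", alice.getPrivate())));  // prints "secret"
    }
}
```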
We explore the inference of expressive, human-readable declassification policies as a step towards providing practical tools and techniques for strong language-based information security. Security-type systems can enforce expressive information-security policies, but can require enormous programmer effort before any security benefit is realized. To reduce the burden on the programmer, we focus on inference of expressive yet intuitive information-security policies from programs with few programmer annotations. We define a novel security policy language that can express what information a program may release, under what conditions (or, when) such release may occur, and which procedures are involved with the release (or, where in the code the release occurs). We describe a dataflow analysis for precisely inferring these policies, and build a tool that instantiates this analysis for the Java programming language. We validate the policies, analysis, and our implementation by applying the tool to a collection of simple Java programs.
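To make the what/when/where distinction concrete, the hypothetical Java fragment below (our own example, not taken from the paper) releases exactly one bit of secret information, and the comments sketch the kind of policy such an inference tool might report.

```java
// Hypothetical example: a tiny program whose only information release is the
// boolean result of comparing a stored password against a supplied guess.
public class Login {
    private final String storedPassword;  // secret input

    public Login(String storedPassword) {
        this.storedPassword = storedPassword;
    }

    // A policy-inference tool of the kind described above might report, roughly:
    //   what:  the single bit (guess == storedPassword)
    //   when:  only when a caller supplies a guess to check
    //   where: via the procedure Login.checkPassword
    // rather than flagging the entire storedPassword field as released.
    public boolean checkPassword(String guess) {
        return storedPassword.equals(guess);  // the only flow from secret to public
    }

    public static void main(String[] args) {
        Login login = new Login("hunter2");
        System.out.println(login.checkPassword("letmein"));  // prints false
    }
}
```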
Google's Android platform includes a permission model that protects access to sensitive capabilities, such as Internet access, GPS use, and telephony. While permissions provide an important level of security, for many applications they allow broader access than actually required. In this paper, we introduce a novel framework that addresses this issue by adding finer-grained permissions to Android. Underlying our framework is a taxonomy of four major groups of Android permissions, each of which admits some common strategies for deriving sub-permissions. We used these strategies to investigate fine-grained versions of five of the most common Android permissions, including access to the Internet, user contacts, and system settings. We then developed a suite of tools that allow these fine-grained permissions to be inferred on existing apps; to be enforced by developers on their own apps; and to be retrofitted by users on existing apps. We evaluated our tools on a set of top apps from Google Play, and found that fine-grained permissions are applicable to a wide variety of apps and that they can be retrofitted to increase security of existing apps without affecting functionality.
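The sketch below illustrates one way a developer-enforced sub-permission of android.permission.INTERNET could look: all network calls in an app are routed through a wrapper that restricts access to an allow-list of hosts. The class name and policy format are hypothetical illustrations, not the paper's actual tools or Android's permission framework.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Set;

// Illustrative sketch only: a fine-grained "Internet access to these hosts only"
// permission, enforced in application code rather than by the platform.
public final class ScopedInternet {
    private final Set<String> allowedHosts;

    public ScopedInternet(Set<String> allowedHosts) {
        this.allowedHosts = allowedHosts;
    }

    // If every network call goes through this wrapper, the app's effective
    // permission is narrowed from "any Internet access" to the listed hosts.
    public HttpURLConnection open(String url) throws IOException {
        URL u = new URL(url);
        if (!allowedHosts.contains(u.getHost())) {
            throw new SecurityException("host not covered by fine-grained permission: " + u.getHost());
        }
        return (HttpURLConnection) u.openConnection();
    }

    public static void main(String[] args) throws IOException {
        ScopedInternet net = new ScopedInternet(Set.of("api.example.com"));
        // Allowed: net.open("https://api.example.com/v1/data")
        // Denied:  net.open("https://tracker.example.net/beacon") throws SecurityException
    }
}
```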