2011 IEEE Symposium on Security and Privacy
DOI: 10.1109/sp.2011.20

Inference of Expressive Declassification Policies

Abstract: We explore the inference of expressive human-readable declassification policies as a step towards providing practical tools and techniques for strong language-based information security. Security-type systems can enforce expressive information-security policies, but can require enormous programmer effort before any security benefit is realized. To reduce the burden on the programmer, we focus on inference of expressive yet intuitive information-security policies from programs with few programmer annotati…
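To make the abstract's goal concrete, here is a small illustrative sketch (not taken from the paper; the class, field, and method names are hypothetical). The citing statements below note that Vaughan and Chong's analysis targets Java programs and infers policies describing what sensitive information a program may reveal; for a password check like the one in this sketch, such a policy could be read informally as "the program may release whether the guess equals the password", rather than releasing the password itself.

// Hypothetical Java example: the only fact about the secret password that
// reaches a public observer is the boolean result of the equality test, so
// an inferred declassification policy for this program could be summarized
// informally as "may release: guess == password".
public class PasswordChecker {

    private final String password;  // secret (high-confidentiality) input

    public PasswordChecker(String password) {
        this.password = password;
    }

    // Publicly observable result: reveals only whether the guess matches,
    // not the contents of the password itself.
    public boolean check(String guess) {
        return password.equals(guess);
    }

    public static void main(String[] args) {
        PasswordChecker checker = new PasswordChecker("s3cret");
        System.out.println(checker.check("letmein"));  // prints: false
    }
}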

Cited by 27 publications (19 citation statements)
References 32 publications

“…Vaughan and Chong [57] use a data-flow analysis to infer expressive information security policies that describe what sensitive information may be revealed by a program. King et al [29], Pottier and Conchon [43], Smith and Thober [51], and the Jif compiler [40,41] all perform various forms of type inference for security-typed languages.…”
Section: Related Work
confidence: 99%
“…In [25], leaks are inferred automatically and expressed in a human-readable security policy language, helping programmers decide whether the program is secure; however, it cannot give concrete counterexamples that could suggest further corrections. Counterexamples can be used not only to generate executable exploits as in our approach, but also to refine declassification policies, quantifying the leakage [3,1].…”
Section: Related Work
confidence: 99%
“…Askarov and Sabelfeld [4] study a declassification framework specifying what and where data is released. Vaughan and Chong [46] infer declassification policies for Java programs.…”
Section: Related Work
confidence: 99%