Risk analysis is an essential methodology for cybersecurity, as it allows organizations to deal with the cyber threats potentially affecting them, prioritize the defense of their assets, and decide which security controls should be implemented. Many risk analysis methods appear in cybersecurity models, compliance frameworks, and international standards. However, most of them employ risk matrices, which suffer from shortcomings that may lead to suboptimal resource allocations. We propose a comprehensive framework for cybersecurity risk analysis, covering both intentional and unintentional threats and the use of insurance as part of the security portfolio. A simplified case study illustrates the proposed framework, serving as a template for more complex problems.
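To make the shortcoming of risk matrices concrete, here is a minimal sketch (with hypothetical numbers, not the paper's framework): an ordinal likelihood-by-impact matrix bins threats into coarse cells, so two threats with very different expected losses can receive the same rating, whereas a quantitative expected-loss calculation distinguishes them.

```python
# Illustrative sketch with made-up numbers; not the framework proposed in the paper.
# A 3x3 qualitative risk matrix: rows = likelihood, cols = impact (0=low, 1=med, 2=high).
MATRIX = [
    ["low",    "low",    "medium"],
    ["low",    "medium", "high"],
    ["medium", "high",   "high"],
]

def matrix_rating(likelihood, impact):
    """Qualitative rating from ordinal likelihood/impact levels."""
    return MATRIX[likelihood][impact]

def expected_annual_loss(rate, loss):
    """Quantitative alternative: expected loss = annual event rate * loss per event."""
    return rate * loss

# Two threats that land in the same matrix cell but differ 20x in expected loss:
print(matrix_rating(1, 2), expected_annual_loss(0.5, 1_000_000))  # high 500000.0
print(matrix_rating(1, 2), expected_annual_loss(0.5, 50_000))     # high 25000.0
```

A matrix-based prioritization would treat both threats identically; a quantitative analysis allocates defensive resources to the first.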
We consider the problem of providing valid inference for a selected parameter in a sparse regression setting. It is well known that classical regression tools can be unreliable in this context due to the bias generated in the selection step. Many approaches have been proposed in recent years to ensure inferential validity. Here, we consider a simple alternative to data splitting based on randomising the response vector, which allows for higher selection and inferential power than data splitting and is applicable with an arbitrary selection rule. We provide a theoretical and empirical comparison of both methods and extend the randomisation approach to non-normal settings. Our investigations show that the gain in power can be substantial.
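A hedged sketch of the randomisation idea described above (variable names are ours, not the paper's): construct U = y + w for selection and V = y - (sigma^2/gamma^2) w for inference, where w is independent Gaussian noise. Under normal errors, U and V are independent, so any selection rule applied to U leaves classical inference based on V valid, while both parts retain information from the full sample.

```python
# Hedged illustration of response randomisation for post-selection inference.
# Assumed setup (not the paper's exact construction): y has i.i.d. N(mu, sigma^2)
# components; w ~ N(0, gamma^2 I) is auxiliary noise drawn by the analyst.
import numpy as np

rng = np.random.default_rng(0)
n, sigma, gamma = 100_000, 1.0, 0.7

y = rng.normal(0.0, sigma, size=n)            # toy response vector
w = rng.normal(0.0, gamma, size=n)            # analyst-generated randomisation

u = y + w                                     # used only for model selection
v = y - (sigma**2 / gamma**2) * w             # used only for inference

# The construction works because Cov(U_i, V_i) = sigma^2 - (sigma^2/gamma^2)*gamma^2 = 0,
# and jointly Gaussian uncorrelated variables are independent.
cov_uv = np.mean(u * v) - u.mean() * v.mean()
print(abs(cov_uv) < 0.05)  # empirical covariance is near zero
```

Data splitting is the special case where each observation is assigned wholly to selection or to inference; the randomised decomposition instead lets every observation contribute to both stages.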
We review the empirical Bayes approach to large-scale inference. In the context of inference for a high-dimensional normal mean, empirical Bayes methods are advocated as they exhibit risk-reducing shrinkage while establishing appropriate control of the frequentist properties of the inference. We elucidate these frequentist properties and evaluate the protection that empirical Bayes provides against selection bias.
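The risk-reducing shrinkage mentioned above can be illustrated with a standard James-Stein-type empirical Bayes estimator (a textbook instance, not necessarily the review's exact formulation): for X_i ~ N(theta_i, 1) with a N(0, tau^2) prior on each theta_i, the posterior mean shrinks each X_i toward zero by a factor estimated from the data itself.

```python
# Hedged illustration: empirical Bayes shrinkage for a high-dimensional normal mean.
# Simulated data with assumed parameter values; not results from the review.
import numpy as np

rng = np.random.default_rng(1)
p = 1000
theta = rng.normal(0.0, 0.5, size=p)          # true means, theta_i ~ N(0, tau^2)
x = theta + rng.normal(0.0, 1.0, size=p)      # observations, X_i ~ N(theta_i, 1)

# Marginally X_i ~ N(0, 1 + tau^2), so the shrinkage factor tau^2/(1 + tau^2)
# can be estimated from sum(X_i^2) without knowing tau (James-Stein plug-in).
shrink = max(0.0, 1.0 - (p - 2) / np.sum(x**2))
theta_eb = shrink * x

risk_mle = np.mean((x - theta) ** 2)          # risk of the unshrunk estimate
risk_eb = np.mean((theta_eb - theta) ** 2)    # risk after empirical Bayes shrinkage
print(risk_eb < risk_mle)                     # shrinkage reduces squared-error loss
```

Here the unshrunk estimate has average squared error near 1, while the shrunk estimate's error is close to the Bayes risk tau^2/(1 + tau^2); the same shrinkage mechanism is what tempers selection bias when the largest coordinates are singled out.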