Multi-criteria assessments are increasingly employed to prioritise health threats, supporting decision processes related to health risk management. The use of multi-criteria analysis in this context is welcome: it facilitates the consideration of the multiple impacts of health threats, it can incorporate expert judgment to complement and amalgamate the available evidence, and it permits the modelling of policy makers' priorities. However, these assessments often lack a clear multi-criteria conceptual framework, in terms of both axiomatic rigour and adequate procedures for preference modelling. From a multi-criteria decision analysis perspective, such assessments are ad hoc, despite the strong health expertise used in constructing these models. In this paper we critically examine some key assumptions and modelling choices made in these assessments, comparing them with the best practices of multi-attribute value analysis. Furthermore, we suggest a set of guidelines on how simulation studies might be employed to assess the impact of these modelling choices, and we apply these guidelines to two relevant studies in the health threat prioritisation domain. Our simulations reveal severe variability attributable to poor modelling choices, variability large enough to change the ranking of the threats being assessed and thus to lead to policy recommendations different from those suggested in the original reports. Our results confirm the importance of carefully designing multi-criteria evaluation models for the prioritisation of health threats.
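To make the sensitivity issue concrete, the following minimal sketch illustrates one way such a simulation study could be set up: an additive multi-attribute value model whose criterion weights are perturbed by Monte Carlo sampling, counting how often the ranking of threats changes. The scores, weights, and perturbation scheme below are invented for illustration and are not taken from the studies discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical partial value scores (scaled 0-1) of four health threats on three criteria.
scores = np.array([
    [0.9, 0.2, 0.6],   # threat A
    [0.5, 0.8, 0.4],   # threat B
    [0.7, 0.6, 0.3],   # threat C
    [0.3, 0.5, 0.9],   # threat D
])
baseline_weights = np.array([0.5, 0.3, 0.2])

def ranking(weights):
    """Rank threats by additive multi-attribute value (higher value = higher priority)."""
    values = scores @ weights
    return tuple(np.argsort(-values))

baseline = ranking(baseline_weights)

# Monte Carlo perturbation: sample weight vectors around the baseline (summing to 1)
# and count how often the resulting ranking differs from the baseline ranking.
n_samples = 10_000
changes = sum(ranking(rng.dirichlet(baseline_weights * 20)) != baseline
              for _ in range(n_samples))

print(f"baseline ranking (indices, best first): {baseline}")
print(f"ranking changed in {changes / n_samples:.1%} of simulated weight sets")
```

A high proportion of rank changes under modest weight perturbations would indicate that the model's recommendations are fragile with respect to the weighting choices, which is the kind of variability the guidelines are intended to expose.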
In recent years, Attribute-Based Access Control (ABAC) has become a popular and effective means of enforcing access control in dynamic and collaborative environments. Implementing ABAC requires creating a set of attribute-based rules that cumulatively form a policy. Designing an ABAC policy ab initio demands substantial effort from the system administrator. Moreover, organizational changes may necessitate adding new rules to an already deployed policy; re-mining the entire ABAC policy in such cases requires considerable time and administrative effort, so it is preferable to augment the policy incrementally. With the aim of reducing this administrative overhead, in this paper we propose PAMMELA, a Policy Administration Methodology using Machine Learning that helps system administrators create new ABAC policies as well as augment existing ones. PAMMELA can generate a new policy for an organization by learning the rules of a policy currently enforced in a similar organization. For policy augmentation, it can infer new rules based on the knowledge gathered from the existing rules. Experimental results show that the proposed approach performs reasonably well in terms of standard machine learning evaluation metrics as well as execution time.
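As a purely illustrative sketch of the general idea of learning attribute-based access rules with supervised machine learning (not a description of PAMMELA's actual algorithm), the example below trains a decision tree on hypothetical access records labelled permit/deny by an existing policy. The learned tree can then be inspected as candidate attribute-based rules or used to suggest decisions for new requests. All attribute names and values are invented.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier, export_text

# Each record: (user role, user department, resource type, action) -> decision
records = [
    ("doctor", "cardiology", "patient_record", "read",  "permit"),
    ("doctor", "cardiology", "patient_record", "write", "permit"),
    ("nurse",  "cardiology", "patient_record", "read",  "permit"),
    ("nurse",  "cardiology", "patient_record", "write", "deny"),
    ("clerk",  "billing",    "patient_record", "read",  "deny"),
    ("clerk",  "billing",    "invoice",        "read",  "permit"),
    ("clerk",  "billing",    "invoice",        "write", "permit"),
    ("doctor", "cardiology", "invoice",        "write", "deny"),
]
X = [r[:-1] for r in records]   # attribute tuples
y = [r[-1] for r in records]    # permit / deny labels

# One-hot encode the categorical attributes, then fit a decision tree.
model = make_pipeline(
    OneHotEncoder(handle_unknown="ignore"),
    DecisionTreeClassifier(random_state=0),
)
model.fit(X, y)

# Inspect the learned splits (a starting point for candidate rules) and
# suggest a decision for a previously unseen request.
print(export_text(model.named_steps["decisiontreeclassifier"]))
print(model.predict([("nurse", "cardiology", "invoice", "read")]))
```

In this toy setting, the tree's splits over one-hot encoded attributes play the role of inferred rules; an administrator would still review and translate them into the deployed policy rather than enforce them directly.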