Requirements classification is a traditional application of machine learning (ML) to RE that helps handle large requirements datasets. A prime example of an RE classification problem is the distinction between functional and non-functional (quality) requirements. State-of-the-art classifiers owe their effectiveness to a large set of word features such as text n-grams or POS n-grams, which do not fully capture the essence of a requirement. As a result, it is arduous for human analysts to interpret the classification results by exploring the classifier's inner workings. We propose the use of more general linguistic features, such as dependency types, for the construction of interpretable ML classifiers for RE. Through a feature engineering effort, in which we are assisted by modern introspection tools that reveal the hidden inner workings of ML classifiers, we derive a set of 17 linguistic features. While classifiers that use our proposed features fit the training set slightly worse than those that use high-dimensional feature sets, our approach generally performs better on validation datasets and is more interpretable.
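As a rough illustration of the idea, not the paper's actual pipeline or its 17-feature set, a toy sketch of classifying requirements with a handful of interpretable linguistic features (hypothetical features and weights) rather than thousands of n-gram features might look like:

```python
# Toy sketch: interpretable requirement classification with a few
# hand-crafted linguistic features. The feature names, word lists, and
# weights below are illustrative assumptions, not the paper's method.

MODALS = {"shall", "must", "should", "will"}
QUALITY_ADJECTIVES = {"fast", "secure", "usable", "reliable", "scalable"}

def extract_features(requirement: str) -> dict:
    tokens = requirement.lower().rstrip(".").split()
    return {
        "has_modal": any(t in MODALS for t in tokens),
        "has_quality_adjective": any(t in QUALITY_ADJECTIVES for t in tokens),
        "mentions_system_actor": tokens[0] in {"the", "a"} and "system" in tokens,
    }

# A transparent linear model: each weight can be read off directly,
# so an analyst can see *why* a requirement received its label.
WEIGHTS = {"has_modal": 1.0, "has_quality_adjective": -2.0, "mentions_system_actor": 0.5}

def classify(requirement: str) -> str:
    feats = extract_features(requirement)
    score = sum(WEIGHTS[name] for name, active in feats.items() if active)
    return "functional" if score > 0 else "quality"

print(classify("The system shall export reports as PDF."))            # functional
print(classify("The system must be secure against replay attacks."))  # quality
```

With only three named features, the decision for any input can be explained by listing which features fired and their weights, which is the interpretability property the abstract contrasts with high-dimensional n-gram models.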
To guarantee that the overall intended objectives of a multiagent system are achieved, the behavior of individual agents should be controlled and coordinated. Such coordination can be achieved, without limiting the agents' autonomy, via runtime norm enforcement. However, due to the dynamicity and uncertainty of the environment, the enforced norms can be ineffective. In this paper, we propose a runtime supervision mechanism that automatically revises norms when their enforcement appears to be ineffective. The decision to revise norms is based on a Bayesian Network that gives information about the likelihood of achieving the overall intended system objectives by enforcing the norms. Norms can be revised in three ways: relaxation, strengthening, and alteration. We evaluate the supervision mechanism on an urban smart traffic simulation.
To achieve system-level properties of a multiagent system, the behavior of individual agents should be controlled and coordinated. One way to control agents without limiting their autonomy is to enforce norms by means of sanctions. The dynamicity and unpredictability of the agents’ interactions in uncertain environments, however, make it hard for designers to specify norms that will guarantee the achievement of the system-level objectives in every operating context. In this paper, we propose a runtime mechanism for the automated revision of norms by altering their sanctions. We use a Bayesian Network to learn, from system execution data, the relationship between the obedience/violation of the norms and the achievement of the system-level objectives. By combining the knowledge acquired at runtime with an estimation of the preferences of rational agents, we devise heuristic strategies that automatically revise the sanctions of the enforced norms. We evaluate our heuristics using a traffic simulator and we show that our mechanism is able to quickly identify optimal revisions of the initially enforced norms.
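The core loop described in these two abstracts, learning from execution data how norm obedience relates to objective achievement and then revising sanctions, can be sketched in miniature. This is a simplified stand-in: the run data, the conditional-probability estimate (here simple frequency counts rather than a full Bayesian Network), and the revision thresholds are all illustrative assumptions.

```python
# Minimal sketch (hypothetical data and thresholds): estimate, from logged
# system runs, how often the system objective is achieved when a norm is
# obeyed vs. violated, then revise the norm's sanction accordingly.
from collections import Counter

# Each run records (norm_obeyed, objective_achieved).
runs = [(True, True), (True, True), (False, False),
        (False, False), (True, False), (False, True)]

counts = Counter(runs)

def p_objective_given(obeyed: bool) -> float:
    achieved = counts[(obeyed, True)]
    total = achieved + counts[(obeyed, False)]
    return achieved / total if total else 0.0

def revise_sanction(sanction: float) -> float:
    """Heuristic: if obeying the norm clearly helps the objective,
    strengthen the sanction to deter violation; if obeying does not
    help, relax it. (Alteration of the norm's condition itself is a
    third option not modeled here.)"""
    gain = p_objective_given(True) - p_objective_given(False)
    if gain > 0.2:
        return sanction * 1.5   # strengthen
    if gain < -0.2:
        return sanction * 0.5   # relax
    return sanction

print(revise_sanction(10.0))  # obeying helps here, so the sanction grows
```

A real Bayesian Network would additionally condition on the operating context, which is what lets the papers' mechanism distinguish contexts where the same norm is effective from those where it is not.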
Modelling social phenomena in large-scale agent-based simulations has long been a challenge due to the computational cost of incorporating agents whose behaviors are determined by reasoning about their internal attitudes and external factors. However, COVID-19 has brought the urgency of doing this to the fore, as, in the absence of viable pharmaceutical interventions, the progression of the pandemic has primarily been driven by behaviors and behavioral interventions. In this paper, we address this problem by developing a large-scale data-driven agent-based simulation model where individual agents reason about their beliefs, objectives, trust in government, and the norms imposed by the government. These internal and external attitudes are based on actual data concerning daily activities of individuals, their political orientation, and norms being enforced in the US state of Virginia. Our model is calibrated using mobility and COVID-19 case data. We show the utility of our model by quantifying the benefits of the various behavioral interventions through counterfactual runs of our calibrated simulation.
Agent-based simulation is increasingly being used to model social phenomena involving large numbers of agents. However, existing agent-based simulation platforms severely limit the kinds of social phenomena that can be modeled, as they do not support large-scale simulations involving agents with complex behaviors. In this paper, we present a scalable agent-based simulation framework that supports modeling of complex social phenomena. The framework integrates a new simulation platform that exploits distributed computer architectures with an extension of a multi-agent programming technology that allows development of complex deliberative agents. To show the scalability of our framework, we briefly describe its application to the development of a model of the spread of COVID-19 involving complex deliberative agents in the US state of Virginia.
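To make the notion of "deliberative agents" in these two abstracts concrete, a toy sketch follows. The attitudes (trust in government, personal need to go out) and the compliance rule are hypothetical simplifications of the internal reasoning the papers describe, and nothing here reflects the actual framework or its distributed architecture.

```python
# Toy sketch (hypothetical attitudes and decision rule): deliberative
# agents decide at each step whether to comply with a stay-at-home norm,
# weighing their own need to go out against trust in the government.
import random

class Agent:
    def __init__(self, trust: float, need_to_go_out: float):
        self.trust = trust                  # trust in government, in [0, 1]
        self.need_to_go_out = need_to_go_out

    def complies(self) -> bool:
        # Comply when trust outweighs the personal need to go out.
        return self.trust >= self.need_to_go_out

def simulate(agents, steps: int) -> list:
    """Return the fraction of compliant agents at each step."""
    fractions = []
    for _ in range(steps):
        compliant = sum(a.complies() for a in agents)
        fractions.append(compliant / len(agents))
        # Assumed dynamic: trust erodes slightly while the norm stays in force.
        for a in agents:
            a.trust = max(0.0, a.trust - 0.01)
    return fractions

random.seed(0)
population = [Agent(random.random(), random.random()) for _ in range(1000)]
print(simulate(population, 5))
```

Scaling this pattern to millions of agents with richer internal models (beliefs, objectives, norms) is precisely the engineering problem the framework paper addresses by distributing the agent population across machines.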