2008 Second IEEE International Conference on Self-Adaptive and Self-Organizing Systems 2008
DOI: 10.1109/saso.2008.46
Leveraging Organizational Guidance Policies with Learning to Self-Tune Multiagent Systems


Cited by 7 publications (7 citation statements)
References 21 publications
“…Figure 6c (note line on the x-axis) shows that without the generated policies, our system produced much lower quality products than with the generated policies. We have also run similar experiments with other multiagent system design models such as those described in [7] and obtained similar results.…”
Section: Discussion
Confidence: 55%
“…Some work has been done on model checking multiagent systems [10], [13]. Automated policy generation has been used for online learning [7]. These methods help the multiagent system better tune to the environment in which it is deployed.…”
Section: Introduction
Confidence: 99%
“…To demonstrate the application of the P-graph framework for assessing the design of OMACS-based multiagent systems, a survey is given of a simplified Cooperative Robotic Search Team (CRST) system [20], [33]. Essentially, the task is to design a team of robots whose goal is to search different areas of a given location on a map.…”
Section: Motivational Example: Cooperative Robotic Search Team System
Confidence: 99%
“…Unlike the available algorithmic methods for computing the quality of a proposed set of assignments in OMACS (i.e., agents a_i ∈ A_OMACS assigned to play roles r_k ∈ R_OMACS in order to achieve goals g_j ∈ G_OMACS), which perform a step-by-step computation and derive no mathematical programming model [4], [20], [30], [33], [37], [38], we propose a simple mathematical programming model that is derived from the maximal structure generated by algorithm MSG and does not impair the optimality of the resultant solution.…”
Section: Mathematical Programming Model
Confidence: 99%
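The quoted passage contrasts step-by-step computation of OMACS assignment quality with a mathematical programming formulation. As a minimal sketch of the step-by-step side, the snippet below scores an agent–role–goal assignment as the product of a capability score and a role-achieves-goal score, then exhaustively searches one-to-one pairings. The score tables, agent/role/goal names, and the product form are illustrative assumptions, not the papers' actual functions.

```python
from itertools import permutations

# Hypothetical scores (values in [0, 1]): how well each agent can play
# each role, and how well each role contributes to each goal.
capable = {("a1", "r1"): 0.9, ("a1", "r2"): 0.4,
           ("a2", "r1"): 0.6, ("a2", "r2"): 0.8}
achieves = {("r1", "g1"): 1.0, ("r2", "g2"): 0.7}

def assignment_score(agent, role, goal):
    # Assumed OMACS-style potential: zero unless the agent can play the
    # role and the role contributes to the goal.
    return capable.get((agent, role), 0.0) * achieves.get((role, goal), 0.0)

def best_assignment(agents, tasks):
    # tasks: list of (role, goal) pairs. Exhaustively try every
    # one-to-one pairing of agents with tasks (the "step-by-step" search
    # that a mathematical programming model would replace).
    best, best_total = None, -1.0
    for perm in permutations(agents, len(tasks)):
        total = sum(assignment_score(a, r, g)
                    for a, (r, g) in zip(perm, tasks))
        if total > best_total:
            best, best_total = list(zip(perm, tasks)), total
    return best, best_total

assign, total = best_assignment(["a1", "a2"], [("r1", "g1"), ("r2", "g2")])
# assign pairs a1 with (r1, g1) and a2 with (r2, g2); total = 0.9 + 0.56
```

An exact optimization model, as the citation proposes, would avoid this factorial enumeration for larger teams.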
“…Policies have been used in multi-agent system engineering for some time, and several languages, frameworks, enforcement and checking mechanisms have been developed (Bradshaw et al., 2003; Shoham and Tennenholtz, 1995; Harmon et al., 2007, 2008). In general, policies are used to restrict agent behaviour and may be enforced at design time or at runtime.…”
Section: Model Policies
Confidence: 99%
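The quoted passage describes policies as restrictions on agent behaviour, enforceable at design time or at runtime. The sketch below illustrates the runtime case only: a guard that filters an agent's candidate actions through a set of prohibition policies. The policy, state fields, and action names are hypothetical examples, not taken from the cited frameworks.

```python
from typing import Callable, Dict, List

# A policy maps (action, agent state) to True if the action is allowed.
Policy = Callable[[str, Dict], bool]

def no_move_when_low_battery(action: str, state: Dict) -> bool:
    # Example prohibition policy: forbid "move" when battery is below 20%.
    return not (action == "move" and state.get("battery", 100) < 20)

def allowed_actions(actions: List[str], state: Dict,
                    policies: List[Policy]) -> List[str]:
    # Runtime enforcement: an action survives only if every policy permits it.
    return [a for a in actions if all(p(a, state) for p in policies)]

acts = allowed_actions(["move", "scan", "report"],
                       {"battery": 10},
                       [no_move_when_low_battery])
# acts == ["scan", "report"]
```

Design-time enforcement would instead check the agent's behavioural model against the same policies before deployment, rather than filtering actions during execution.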