2014
DOI: 10.1002/asjc.982
Markov Chain Approach to Probabilistic Guidance for Swarms of Autonomous Agents

Abstract: Motivated by biological swarms occurring in nature, there is recent interest in developing swarms comprised completely of engineered agents. The main challenge for developing swarm guidance laws compared to earlier formation flying and multi‐vehicle coordination approaches is the sheer number of agents involved. While formation flying applications might involve up to 10 to 20 agents, swarms are desired to contain hundreds to many thousands of agents. In order to deal with the sheer size, the present paper make…
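To make the guidance idea concrete, the sketch below illustrates the general mechanism (an illustration only, not the paper's exact algorithm: the 3-bin transition matrix, agent count, and step count are arbitrary assumptions). The operating region is partitioned into bins, a column-stochastic matrix M is chosen, and every agent independently samples its next bin from the column of M indexed by its current bin; the swarm density then follows x_{k+1} = M x_k and converges to the stationary distribution of M with no inter-agent communication.

```python
# Minimal sketch of probabilistic swarm guidance with a Markov chain.
# Assumptions: 3 bins, a hand-picked column-stochastic matrix M, 1000 agents.
import numpy as np

rng = np.random.default_rng(0)

# Column-stochastic guidance matrix (each column sums to 1).
M = np.array([[0.8, 0.1, 0.2],
              [0.1, 0.8, 0.2],
              [0.1, 0.1, 0.6]])

# Stationary distribution v of M, i.e. the density the swarm should settle into.
w, V = np.linalg.eig(M)
v = np.real(V[:, np.argmin(np.abs(w - 1))])
v /= v.sum()

# Each agent independently samples its next bin from the column of its current bin.
bins = np.zeros(1000, dtype=int)            # all agents start in bin 0
for _ in range(60):
    bins = np.array([rng.choice(3, p=M[:, b]) for b in bins])

print("stationary v      :", np.round(v, 3))
print("swarm distribution:", np.round(np.bincount(bins, minlength=3) / len(bins), 3))
```

Designing M for a prescribed target density on a given motion graph is the synthesis step; one standard route, Metropolis-Hastings, is sketched after the citation that mentions it below.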

Cited by 42 publications (26 citation statements)
References 48 publications (95 reference statements)
“…where the third line uses (1). Regarding (ii), when the graph is complete and P = 1_n π, the return time T_ii follows the geometric distribution:…”
Section: Optimal Solution For Complete Graphs With Unitary Travel
confidence: 99%
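For context on the quoted statement: when the graph is complete and every row of P equals the stationary distribution π, successive states are independent draws from π, so the return time to node i is geometric. The derivation below is standard Markov-chain material rather than text from the citing paper; π_i and T_ii follow the quote's notation.

```latex
% Complete graph with P = \mathbf{1}_n \pi: every row of P equals \pi, so each step
% lands on node i with probability \pi_i independently of the past, and the first
% return to i is a run of independent Bernoulli(\pi_i) trials.
\[
  \Pr(T_{ii} = k) = (1-\pi_i)^{k-1}\,\pi_i, \qquad k \ge 1,
  \qquad\Longrightarrow\qquad
  \mathbb{E}[T_{ii}] = \frac{1}{\pi_i},
\]
% consistent with Kac's formula for the mean return time of an ergodic chain.
```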
“…Stochastic surveillance strategies, which emphasize the unpredictability of the movement of the patroller, are desirable since they are capable of defending against intelligent intruders who aim to avoid detection/capture. One of the main approaches to the design of robotic stochastic surveillance strategies is to adopt Markov chains; e.g., see the early reference [14] and the more recent [1], [6], [9], [19]. Srivastava et al [24] justified the Markov chain-based stochastic surveillance strategy by showing that for the deterministic strategies, in addition to predictability, it is also hard to specify the visit frequency.…”
Section: Introduction
confidence: 99%
“…These two assumptions ensure that there exists some Markov matrix M for which lim_{k→∞} M^k = v1^T, and that Algorithm 2 terminates as a result of Proposition III.6 from the existence of some ε > 0 such that g − Gv ≥ ε1. Furthermore, if A_a is symmetric, then an ergodic, reversible Markov chain can be explicitly constructed, e.g., with the Metropolis-Hastings algorithm, which would ensure that a feasible solution exists [12].…”
Section: A Synthesis Procedures
confidence: 99%
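The Metropolis-Hastings construction mentioned in the quote can be sketched as follows (assumptions of this sketch, not taken from the citing paper: a symmetric adjacency A_a without self-loops, a strictly positive target distribution v, uniform proposals over neighbours, and the column-stochastic convention in which the quoted limit reads lim_{k→∞} M^k = v1^T).

```python
# Sketch: build an ergodic, reversible, column-stochastic M with M v = v whose
# off-diagonal support is restricted to a symmetric adjacency A (Metropolis-Hastings).
import numpy as np

def metropolis_hastings(A, v):
    n = len(v)
    deg = A.sum(axis=1)                      # neighbour counts (uniform proposal)
    M = np.zeros((n, n))
    for j in range(n):                       # column j = transitions out of bin j
        for i in range(n):
            if i != j and A[i, j]:
                # propose neighbour i with prob 1/deg[j], accept with the MH ratio
                M[i, j] = (1.0 / deg[j]) * min(1.0, (v[i] * deg[j]) / (v[j] * deg[i]))
        M[j, j] = 1.0 - M[:, j].sum()        # leftover probability: stay in bin j
    return M

# Path graph over 4 bins, non-uniform target distribution.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
v = np.array([0.4, 0.3, 0.2, 0.1])
M = metropolis_hastings(A, v)

# Numerical check of the quoted properties: M^k -> v 1^T, M v = v, reversibility.
print(np.round(np.linalg.matrix_power(M, 400), 3))          # every column -> v
print("M v = v:", np.allclose(M @ v, v),
      "| reversible:", np.allclose(M * v, (M * v).T))
```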
“…If no reversible ergodic Markov matrix is found, then this problem can be modified by replacing (13e) with LMI (11) and P ≥ 0. Bilinearity can be avoided by fixing D, e.g., D = diag(v)^{-1} as suggested in [12], and testing various values of λ in the interval [0, 1] to find the minimum. This procedure requires solving a sequence of linear feasibility problems in the matrix variable M.…”
Section: A Synthesis Procedures
confidence: 99%
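The quoted fallback procedure, fixing D and then testing values of λ in [0, 1] with one feasibility problem per value, can be sketched as below. LMI (11) and constraint (13e) are not reproduced in the quote, so a generic mixing-rate bound on the symmetrized chain S = diag(v)^{-1/2} M diag(v)^{1/2} stands in for them; cvxpy, the SCS solver, the toy adjacency, and the target distribution are likewise assumptions of this sketch.

```python
# Sketch of the lambda-sweep: fix the scaling (here via q = sqrt(v)), then test
# increasing values of lambda, solving a feasibility problem at each one.
# The mixing condition below is a generic SLEM bound, NOT the paper's LMI (11).
import cvxpy as cp
import numpy as np

# Assumed toy data: symmetric adjacency (path graph with self-loops) and target v.
A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]])
v = np.array([0.5, 0.3, 0.2])
n = len(v)
q = np.sqrt(v)
mask = 1 - A                                      # entries that must be zero

def feasible(lam):
    """Feasibility problem in the symmetrized chain S = diag(q)^-1 M diag(q)."""
    S = cp.Variable((n, n), symmetric=True)       # symmetry of S <=> reversibility of M
    cons = [
        S >= 0,                                   # M >= 0 elementwise
        S @ q == q,                               # columns of M sum to 1 and M v = v
        cp.multiply(S, mask) == 0,                # respect the adjacency pattern
        cp.sigma_max(S - np.outer(q, q)) <= lam,  # stand-in mixing-rate condition
    ]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE), S.value

for lam in np.linspace(0.05, 1.0, 20):
    ok, S = feasible(lam)
    if ok:
        M = np.diag(q) @ S @ np.diag(1 / q)       # recover the column-stochastic chain
        print(f"first feasible lambda = {lam:.2f}")
        print(np.round(M, 3))
        break
```

With this stand-in condition the problem is jointly convex in (S, λ), so λ could also be minimized directly; the sweep is kept here to mirror the sequence-of-feasibility-problems structure described in the quote.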
“…The localization of each agent still remains a main assumption. Under similar conditions, one can find the manuscripts [1] and [8], which describe probabilistic swarm guidance algorithms. In [5], the authors present an approach to task allocation for a homogeneous swarm of robots.…”
confidence: 99%