2018 Annual American Control Conference (ACC)
DOI: 10.23919/acc.2018.8431870
Integrating a PCA Learning Algorithm with the SUSD Strategy for a Collective Source Seeking Behavior

Cited by 15 publications (10 citation statements)
References 12 publications
“…which is to maintain a desired distance d_ij only along the q_i direction [1]. As shown in the right of Fig.…”
Section: Simulation and Experimental Results
Confidence: 99%
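The excerpt describes a spacing rule that regulates the inter-agent distance d_ij only along the SUSD direction q_i, leaving the orthogonal component free. A minimal sketch of such a projected control law (the function name, gain k, and exact form are illustrative assumptions, not the cited paper's law):

```python
import numpy as np

def susd_spacing_control(p_i, p_j, q, d_des, k=1.0):
    """Illustrative sketch (not the paper's exact law): drive the
    separation between agents i and j, projected onto the SUSD
    direction q, toward a desired distance d_des. The component of
    the relative position orthogonal to q is left uncontrolled."""
    q = q / np.linalg.norm(q)        # unit SUSD direction
    proj = np.dot(p_j - p_i, q)      # separation measured along q only
    return k * (proj - d_des) * q    # velocity command for agent i
```

When the projected separation already equals d_des the command vanishes; when the agents are too close along q, agent i is pushed in the −q direction to open the gap.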
“…Additionally, V̇_1 → −∞ as θ → 2. This, along with the fact that V_1 → ∞ whenever θ → 2 and ∇z_c > µ_1 > 0, implies that D_1 is a forward invariant set, and thus θ ∈ [0, 2) for all t. For the forced system f(t, θ, δ), we obtain (35). Therefore, using Definition 3.3 of local input-to-state stability in [15], and according to Theorem 4.19 in [23], the origin of the forced system f(t, θ, δ) is locally input-to-state stable.…”
Section: Proof: Consider the Domain D
Confidence: 92%
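The local input-to-state stability property invoked in the excerpt can be written in the standard comparison-function form (a textbook statement, not quoted from the cited proof):

```latex
% Local input-to-state stability of the forced system f(t, \theta, \delta):
% there exist a class-\mathcal{KL} function \beta, a class-\mathcal{K}
% function \gamma, and constants k_1, k_2 > 0 such that, for all
% |\theta(t_0)| < k_1 and all inputs with \sup_{t \ge t_0} |\delta(t)| < k_2,
|\theta(t)| \le \beta\big(|\theta(t_0)|,\, t - t_0\big)
             + \gamma\Big(\sup_{t_0 \le s \le t} |\delta(s)|\Big),
\qquad \forall\, t \ge t_0 .
```

The bound says the trajectory decays like an unforced asymptotically stable system plus a term that shrinks with the size of the disturbance δ.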
“…Tian et al. [10] proposed an improved Vicsek model with a limited field of view, and this model was further extended by Zhang et al. [11] with random line-of-sight directions. Among the extensive studies on flocking control, most use traditional methods such as those based on LQR [12], PCA [13], or a virtual leader [14], which are not effective at handling external disturbances and the nonlinear, time-varying nature of the flocking control problem. This paper uses the deep reinforcement learning (DRL) method to complete the flocking task without the accurate modeling and sophisticated control design required by traditional methods.…”
Section: Introduction
Confidence: 99%
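The Vicsek model referenced in the excerpt is a standard self-propelled-particle model: each agent adopts the mean heading of its neighbors within an interaction radius, perturbed by noise, and moves at constant speed. A minimal 2-D sketch with periodic boundaries (parameter names and defaults are illustrative assumptions, not values from [10] or [11]):

```python
import numpy as np

def vicsek_step(pos, theta, v=0.03, r=1.0, eta=0.1, L=10.0, rng=None):
    """One update of a minimal 2-D Vicsek model (illustrative sketch):
    each agent takes the circular mean of the headings of all agents
    within radius r (including itself), adds uniform noise in
    [-eta/2, eta/2], then moves at constant speed v in a periodic
    square box of side L."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(theta)
    # pairwise displacement with periodic (minimum-image) boundaries
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    nbr = np.linalg.norm(d, axis=-1) < r          # neighbor mask, incl. self
    # circular mean of neighbor headings (avoids angle-wrap artifacts)
    mean_sin = (nbr * np.sin(theta)[None, :]).sum(axis=1)
    mean_cos = (nbr * np.cos(theta)[None, :]).sum(axis=1)
    theta_new = np.arctan2(mean_sin, mean_cos)
    theta_new += rng.uniform(-eta / 2, eta / 2, size=n)
    pos_new = (pos + v * np.stack([np.cos(theta_new),
                                   np.sin(theta_new)], axis=1)) % L
    return pos_new, theta_new
```

With zero noise, two mutually visible agents with headings 0 and π/2 both align to the circular mean π/4 after one step, which is the alignment mechanism the cited models build on.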