2018
DOI: 10.1073/pnas.1800923115
Efficient collective swimming by harnessing vortices through deep reinforcement learning

Abstract: Fish in schooling formations navigate complex flow fields replete with mechanical energy in the vortex wakes of their companions. Their schooling behavior has been associated with evolutionary advantages including energy savings, yet the underlying physical mechanisms remain unknown. We show that fish can improve their sustained propulsive efficiency by placing themselves in appropriate locations in the wake of other swimmers and intercepting judiciously their shed vortices. This swimming strategy leads to col…

Cited by 357 publications (229 citation statements)
References 43 publications
“…It is important to emphasize that RL requires significant computational resources due to the large numbers of episodes required to properly account for the interaction of the agent and the environment. This cost may be trivial for games but it may be prohibitive in experiments and flow simulations, a situation that is rapidly changing (Verma et al 2018).…”
Section: Semi-supervised Learning
confidence: 99%
“…However, one should be aware that our conclusion is based on the tethered motion (fixed CoM) and the absence of kinematic adjustment. A recent study by Verma et al [27] shows that, when learning-based optimized kinematic adjustment is present, wake capture can be advantageous. Therefore, the comparison between the present study and the study of Verma et al [27] demonstrates that there exists a distinction between wake capturing and wake energy harvesting: successful wake capture requires skills in sensing and adjustment, and if the fish (or an artificial swimmer) lacks those skills, wake capture may become energetically unfavorable.…”
mentioning
confidence: 99%
“…A recent study by Verma et al [27] shows that, when learning-based optimized kinematic adjustment is present, wake capture can be advantageous. Therefore, the comparison between the present study and the study of Verma et al [27] demonstrates that there exists a distinction between wake capturing and wake energy harvesting: successful wake capture requires skills in sensing and adjustment, and if the fish (or an artificial swimmer) lacks those skills, wake capture may become energetically unfavorable. Besides the active mechanism, passive mechanisms based on appropriate body flexibility and mass distribution are also potential factors that may influence fish performance in a school [25].…”
mentioning
confidence: 99%
“…Alongside the optimized critic NN, the actor NN is also optimized according to (17). The optimality of the control is guaranteed by the Bellman principle under the hypothesis that the state-action space is known.…”
Section: Deep Deterministic Policy Gradient as an Actor-Critic Algorithm
confidence: 99%
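The actor-critic coupling described in that statement can be sketched with linear function approximators standing in for the two neural networks. This is a minimal illustrative sketch of the deterministic policy gradient idea, not the cited implementation: the names (`actor_w`, `critic_w`), the toy tracking reward, and all hyperparameters are assumptions introduced here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, gamma, lr = 3, 0.99, 0.01

# Linear stand-ins for the actor and critic networks (illustrative only).
actor_w = rng.normal(size=state_dim)        # deterministic policy: a = actor_w @ s
critic_w = rng.normal(size=state_dim + 1)   # action value: Q(s, a) = critic_w @ [s, a]

def actor(s):
    return actor_w @ s

def critic(s, a):
    return critic_w @ np.concatenate([s, [a]])

for _ in range(200):
    s = rng.normal(size=state_dim)
    a = actor(s) + 0.1 * rng.normal()       # exploration noise on the deterministic action
    r = -(a - s[0]) ** 2                    # toy reward: track the first state component
    s2 = rng.normal(size=state_dim)

    # Critic update: one-step TD regression toward r + gamma * Q(s', mu(s')).
    target = r + gamma * critic(s2, actor(s2))
    td_err = target - critic(s, a)
    critic_w += lr * td_err * np.concatenate([s, [a]])

    # Actor update: deterministic policy gradient, dQ/da * da/dtheta.
    # For this linear critic, dQ/da is just the action weight; da/dtheta is s.
    dq_da = critic_w[-1]
    actor_w += lr * dq_da * s
```

The critic step is ordinary temporal-difference learning; the actor step ascends the critic's estimate of action value, which is the structural feature the quoted passage refers to when it says the actor is optimized against the optimized critic.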