2022
DOI: 10.1088/2632-2153/aca7b0
Active particles using reinforcement learning to navigate in complex motility landscapes

Abstract: As the length scales of the smallest technology continue to advance beyond the micron scale, it becomes increasingly important to equip robotic components with the means for intelligent and autonomous decision making with limited information. With the help of a tabular Q-learning algorithm, we design a model for training a microswimmer to navigate quickly through environments given by various scalar motility fields, while receiving only a limited amount of local information. We compare the performances of…
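To make the abstract's training setup concrete, below is a minimal sketch of tabular Q-learning for a swimmer on a discretized motility landscape. Everything in it (grid size, reward shaping, the form of the motility field, the hyperparameters) is an illustrative assumption, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup, not the paper's: a swimmer on an N x N grid whose
# probability of actually moving is set by a local scalar motility field.
N = 20
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]       # up, down, right, left
motility = 0.2 + 0.8 * rng.random((N, N))          # assumed motility field in [0.2, 1.0]
TARGET = (N - 1, N - 1)

Q = np.zeros((N, N, len(ACTIONS)))                 # tabular Q-function Q[x, y, a]
alpha, gamma, eps = 0.1, 0.95, 0.1                 # assumed hyperparameters

def step(state, a):
    """Attempt a move; it succeeds with probability given by the local motility."""
    x, y = state
    if rng.random() < motility[x, y]:
        dx, dy = ACTIONS[a]
        x = min(max(x + dx, 0), N - 1)
        y = min(max(y + dy, 0), N - 1)
    done = (x, y) == TARGET
    return (x, y), (1.0 if done else -0.01), done  # per-step cost rewards fast navigation

for episode in range(2000):
    state = (0, 0)
    for t in range(500):                           # cap episode length
        # epsilon-greedy choice: the swimmer acts on limited, local information
        if rng.random() < eps:
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(np.argmax(Q[state]))
        nxt, r, done = step(state, a)
        # standard tabular Q-learning update
        Q[state][a] += alpha * (r + gamma * np.max(Q[nxt]) * (not done) - Q[state][a])
        state = nxt
        if done:
            break
```

The small per-step penalty is one simple way to reward reaching the target quickly; the paper's actual state representation, reward, and field geometries may differ.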

Cited by 16 publications (10 citation statements)
References 44 publications

“…To understand how evolution shaped navigation and search strategies, one can use reinforcement learning (RL) [25] and genetic algorithms [26,27] to identify optimal and alternative strategies. Recently it has been demonstrated that agents trained with RL (possibly combined with genetic algorithms) can find advantageous swimming strategies in several situations, such as in viscous solutions [28-30], simple energy landscapes [31], steady flows [32-34], turbulent fluids [35-38], and complex motility landscapes [39]. Notwithstanding their merits, in all these studies either the goal of the particle is different from reaching a specific target or, if a target region has to be reached, its position is fixed and thus implicitly learned during training.…”
Section: Introduction
confidence: 99%
“…Yet, electrolytes are but a special case of particles with long-range interactions (decaying as 1/r, where r is the distance between particles), which also include one-component plasmas, active particles, and many others [59-62, 96, 97]. A recent investigation reported remarkable results in which long-range correlations were observed both in driven electrolytes and in active-particle systems [95, 98], for the same underlying mathematical reason. This raises the question of whether the time-dependent behaviour uncovered in the present work extends to this broad class of systems and whether other universal signatures may be unravelled.…”
Section: Hyperuniformity in Time
confidence: 99%
“…[43] demonstrated that artificial self-thermophoretic microswimmers can navigate under the influence of Brownian motion; Monderkamp et al [44] trained active Brownian particles to traverse complex motility landscapes; Gazzola et al [45] and Verma et al [46] found optimal swimming strategies that minimize drag and energy consumption in schools of fish. The above five examples adopt off-policy learning techniques, which means that each update stochastically samples data collected at any point during training, namely,…”
Section: III.2 Optimal Control via Reinforcement Learning
confidence: 99%
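As a concrete illustration of what "each update stochastically samples data collected at any point during training" means, below is a minimal off-policy sketch built around an experience-replay buffer. The buffer size, batch size, and tabular setting are assumptions made for illustration, not details taken from any of the cited works.

```python
import random
from collections import deque

import numpy as np

# Assumed illustrative setting: a small tabular problem with 100 states and
# 4 actions. Off-policy learning stores every transition, whatever policy
# produced it, and later samples uniformly at random from that history.
buffer = deque(maxlen=10_000)          # replay buffer of (s, a, r, s_next, done)
Q = np.zeros((100, 4))
alpha, gamma = 0.1, 0.95

def store(s, a, r, s_next, done):
    """Record a transition collected by any behaviour policy."""
    buffer.append((s, a, r, s_next, done))

def replay_update(batch_size=32):
    """One off-policy update: sampled data may come from any point in training."""
    if len(buffer) < batch_size:
        return
    for s, a, r, s_next, done in random.sample(buffer, batch_size):
        target = r if done else r + gamma * np.max(Q[s_next])
        Q[s, a] += alpha * (target - Q[s, a])
```

Because the max in the target is taken over the learned Q-function rather than over the action the behaviour policy actually chose, the update stays valid however stale or exploratory the sampled transitions are; that decoupling of data collection from policy improvement is what the quoted passage describes.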