2020 IEEE International Workshop on Metrology for Agriculture and Forestry (MetroAgriFor) 2020
DOI: 10.1109/metroagrifor50201.2020.9277630
Reinforcement Learning for Connected Autonomous Vehicle Localization via UAVs

Cited by 18 publications (17 citation statements). References 14 publications.
“…We modify the environment in [50] to a decentralized RL setting where N agents (UAVs) aim to work together to reach a specific target. Each agent can choose a set of four actions {north, south, west, east} as shown in Fig.…”
Section: B. Decentralized RL (mentioning)
confidence: 99%
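To make the quoted setting concrete, the sketch below implements a toy decentralized grid world in which N agents each choose among {north, south, west, east} and move toward a shared target. The grid size, starting positions, reward shaping, and termination rule are illustrative assumptions, not details taken from the cited paper.

```python
import numpy as np

# Minimal sketch of the decentralized setting described in the quote:
# N agents (UAVs) each pick one of four actions and try to reach a common
# target cell. All environment parameters below are illustrative assumptions.

ACTIONS = {
    "north": np.array([0, 1]),
    "south": np.array([0, -1]),
    "west":  np.array([-1, 0]),
    "east":  np.array([1, 0]),
}

class DecentralizedGridEnv:
    def __init__(self, n_agents=3, grid_size=10, target=(7, 7), seed=0):
        self.n_agents = n_agents
        self.grid_size = grid_size
        self.target = np.array(target)
        self.rng = np.random.default_rng(seed)

    def reset(self):
        # Each agent starts at an independent random cell.
        self.positions = self.rng.integers(0, self.grid_size,
                                           size=(self.n_agents, 2))
        return self.positions.copy()

    def step(self, actions):
        # `actions` is a list of action names, one per agent.
        rewards = np.zeros(self.n_agents)
        for i, a in enumerate(actions):
            self.positions[i] = np.clip(self.positions[i] + ACTIONS[a],
                                        0, self.grid_size - 1)
            # Negative distance to the target encourages moving toward it.
            rewards[i] = -np.linalg.norm(self.positions[i] - self.target)
        done = all(np.array_equal(p, self.target) for p in self.positions)
        return self.positions.copy(), rewards, done
```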
“…We assume the scenario contains only line-of-sight components. The estimated position of agents can be obtained as in [50]. The reward function is defined as:…”
Section: B. Decentralized RL (mentioning)
confidence: 99%
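The excerpt is cut off before the reward expression. As a purely illustrative stand-in, and not the formula used in the cited work, a distance-based reward computed from the estimated position could look like the following sketch.

```python
import numpy as np

# Illustrative placeholder only: the quoted excerpt truncates before the
# actual reward definition. Here the reward is simply the negative distance
# between the estimated agent position and the target position.
def illustrative_reward(estimated_position, target_position):
    """Reward grows as the estimated position approaches the target."""
    return -float(np.linalg.norm(np.asarray(estimated_position)
                                 - np.asarray(target_position)))
```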
“…Indeed, time is a key aspect for UAV networks because of their limited energy autonomy [5][6][7] and, thus, it should be properly accounted for when designing UAV control for time-critical applications (e.g., search-and-rescue). In [5], an information-seeking algorithm is developed for extraterrestrial exploration and return-to-base applications, whereas in [8,9] a similar problem is solved using RL for source localization. Algorithms for UAV formation, navigation, and self-localization have been proposed in [10][11][12][13][14], and RL for enhancing communications has been studied in [15][16][17][18].…”
Section: Introduction (mentioning)
confidence: 99%
“…For example, UAVs have played a central role in emergency situations in hazardous environments, in the aftermath of natural disasters, or in search-and-rescue operations. In such events, UAVs have been used as a temporary network infrastructure for localization, communications, and item delivery [1]-[3].…”
Section: Introduction (mentioning)
confidence: 99%
“…In this sense, machine learning (ML) can help in acquiring knowledge of the model through experience. To that end, we adopt reinforcement learning (RL), which is based on the "trial-and-error" philosophy and allows an agent to choose actions so as to maximize the sum of the discounted rewards over the future [3], [6]-[8]. In such settings, UAV navigation is driven by the balance between "exploration" and "exploitation".…”
Section: Introduction (mentioning)
confidence: 99%
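As a concrete illustration of the trial-and-error and exploration/exploitation ideas mentioned in the quote, the following sketch implements tabular Q-learning with an epsilon-greedy policy. The environment interface, state/action space sizes, and hyperparameters are illustrative assumptions, not taken from the cited paper.

```python
import numpy as np

# Hedged sketch of tabular Q-learning with an epsilon-greedy policy.
# Assumes a generic env with integer states, env.reset() -> state and
# env.step(action) -> (next_state, reward, done); all values illustrative.
def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
    rng = np.random.default_rng(seed)
    q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Exploration: random action with probability epsilon;
            # exploitation: greedy action w.r.t. current Q estimates.
            if rng.random() < epsilon:
                action = int(rng.integers(n_actions))
            else:
                action = int(np.argmax(q[state]))
            next_state, reward, done = env.step(action)
            # Temporal-difference update toward the discounted return.
            q[state, action] += alpha * (
                reward + gamma * np.max(q[next_state]) - q[state, action])
            state = next_state
    return q
```

With probability epsilon the agent explores a random action and otherwise exploits its current Q estimates, while the discount factor gamma weights the sum of future rewards, matching the objective described in the quote.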