2020 IEEE Wireless Communications and Networking Conference (WCNC)
DOI: 10.1109/wcnc45663.2020.9120595
Efficient Drone Mobility Support Using Reinforcement Learning

Abstract: Flying drones can be used in a wide range of applications and services from surveillance to package delivery. To ensure robust control and safety of drone operations, cellular networks need to provide reliable wireless connectivity to drone user equipments (UEs). To date, existing mobile networks have been primarily designed and optimized for serving ground UEs, thus making the mobility support in the sky challenging. In this paper, a novel handover (HO) mechanism is developed for a cellular-connected drone sy…

Cited by 47 publications (51 citation statements)
References 12 publications
“…In [24], HO measurements were reported for an aerial drone connected to an LTE network in a suburban environment. In our recent work [15], a HO mechanism based on Q-learning was proposed for a cellular-connected drone network. It was shown that a significant reduction in the number of HOs is attained while maintaining reliable connectivity.…”
Section: Related Work
confidence: 99%
“…It was shown that a significant reduction in the number of HOs is attained while maintaining reliable connectivity. Despite the encouraging results in [15], the tabular Q-learning framework adopted in [15] may have some disadvantages, such as predefined waypoints and high storage requirements when state space is large.…”
Section: Related Work
confidence: 99%