2016
DOI: 10.1109/twc.2016.2524638

Toward Optimal Adaptive Wireless Communications in Unknown Environments

Abstract: Designing efficient channel access schemes for wireless communications without any prior knowledge about the nature of the environment has been a very challenging issue, in which the channel state distribution of all spectrum resources could be entirely or partially stochastic or adversarial at different times and locations. In this paper, we propose an online learning algorithm for adaptive channel access of wireless communications in unknown environments based on the theory of multi-armed bandits (MAB) …
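As an illustrative sketch only (not the paper's actual algorithm, whose details are truncated above), an EXP3-style adversarial-bandit channel selector captures the flavor of online channel access when per-channel rewards may be stochastic or adversarial; the callback get_reward and the parameter gamma below are assumptions made for the example.

import math
import random

def exp3_channel_selector(num_channels, horizon, get_reward, gamma=0.1):
    # Minimal EXP3-style selector: remains robust even when channel rewards
    # are chosen adversarially rather than drawn i.i.d. (sketch only).
    weights = [1.0] * num_channels
    for t in range(horizon):
        total = sum(weights)
        # Mix the exponential weights with uniform exploration.
        probs = [(1 - gamma) * w / total + gamma / num_channels for w in weights]
        channel = random.choices(range(num_channels), weights=probs)[0]
        reward = get_reward(channel, t)      # observed normalized throughput in [0, 1]
        estimate = reward / probs[channel]   # importance-weighted reward estimate
        weights[channel] *= math.exp(gamma * estimate / num_channels)
    return weights

Here get_reward(channel, t) stands in for transmitting on the chosen channel and observing its normalized throughput, and gamma trades exploration against exploitation.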

Cited by 18 publications (8 citation statements) | References 45 publications
“…Given the open broadcast nature of the wireless channel environment and the access contention mechanism among multi-priority users, multi-armed bandit based techniques have played a special role in cognitive networks [272]-[277]. For example, Zhao et al. [272] formulated a restless multi-armed bandit model for opportunistic multichannel access, which approached the maximum attainable throughput by accurately predicting which channel is likely to become idle next.…”
Section: A. Multi-Armed Bandit and Its Applications, 1) Methods
Mentioning confidence: 99%
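As a rough companion to the opportunistic-access idea above, a UCB1-style rule (a simpler stochastic-bandit sketch, not the restless-bandit formulation of [272]) picks the channel whose empirical idle rate plus confidence bonus is largest; the sensing callback sense_idle is assumed for illustration.

import math

def ucb1_channel_access(num_channels, horizon, sense_idle):
    # UCB1-style opportunistic access: favor channels observed idle most often,
    # plus a confidence bonus for rarely sensed channels (illustrative sketch;
    # assumes horizon >= num_channels so every channel is sensed at least once).
    counts = [0] * num_channels        # times each channel has been sensed
    idle_sum = [0.0] * num_channels    # accumulated idle indicators per channel
    for t in range(1, horizon + 1):
        if t <= num_channels:
            channel = t - 1            # sense every channel once to initialize
        else:
            channel = max(
                range(num_channels),
                key=lambda c: idle_sum[c] / counts[c]
                + math.sqrt(2.0 * math.log(t) / counts[c]),
            )
        idle = sense_idle(channel, t)  # 1 if the channel was sensed idle, else 0
        counts[channel] += 1
        idle_sum[channel] += idle
    return [idle_sum[c] / counts[c] for c in range(num_channels)]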
“…In [275], a channel selection scheme was investigated that was capable of adapting to the link quality and hence finding the optimal channel for avoiding interference and deep fading. Moreover, Gwon et al. [273] and Zhou et al. [277] further considered the choice of access strategy in the presence of both legitimate desired users and jamming cognitive radio nodes, which was resilient to adaptive jamming attacks of different strengths, spanning from nearly no attack to a full attack across the entire spectrum. In contrast to sensing and accessing only a single channel, and considering the correlated rewards of different arms, a sequential multi-armed bandit regime was conceived by Li et al. [274] for identifying multiple channels to be sensed in a carefully coordinated order.…”
Section: A. Multi-Armed Bandit and Its Applications, 1) Methods
Mentioning confidence: 99%
“…By utilizing the spectrum waterfall representation, an anti-jamming scheme based on deep reinforcement learning (RL) was proposed in [19] to facilitate the channel-selection process. A multi-armed bandit framework was formulated in [20] to obtain efficient channel-selection strategies. In [21], a multi-domain anti-jamming scheme that tackles both power control and channel selection was proposed for heterogeneous wireless networks.…”
Section: A. Jamming Attacks and Related Countermeasures
Mentioning confidence: 99%
“…By utilizing the spectrum waterfall representation, an anti-jamming scheme based on deep reinforcement learning (RL) was proposed in [13] to facilitate the channel-selection process. A multi-armed bandit framework was formulated in [14] to obtain efficient channel-selection strategies. In [15], a multi-domain anti-jamming scheme that tackles both power control and channel selection was proposed for heterogeneous wireless networks.…”
Section: arXiv:2105.11868v1 [eess.SP] 25 May 2021
Mentioning confidence: 99%
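The two statements above summarize RL-based anti-jamming channel selection; as a heavily simplified, assumption-laden sketch (tabular Q-learning rather than the deep RL of the cited works, with a hypothetical environment callback step), the channel-hopping loop could look like this:

import random

def q_learning_channel_hopping(num_channels, episodes, step,
                               alpha=0.1, discount=0.9, epsilon=0.1):
    # Tabular Q-learning for jamming-aware channel hopping (sketch only).
    # State: channel used in the previous slot; action: channel for the next slot;
    # reward: 1 for a successful (unjammed) transmission, 0 otherwise.
    q = [[0.0] * num_channels for _ in range(num_channels)]
    for _ in range(episodes):
        state = random.randrange(num_channels)
        for _ in range(100):  # slots per episode (assumed)
            if random.random() < epsilon:
                action = random.randrange(num_channels)            # explore
            else:
                action = max(range(num_channels), key=lambda a: q[state][a])
            reward, next_state = step(state, action)               # hypothetical environment
            best_next = max(q[next_state])
            q[state][action] += alpha * (reward + discount * best_next - q[state][action])
            state = next_state
    return q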