2012 International Symposium on Wireless Communication Systems (ISWCS)
DOI: 10.1109/iswcs.2012.6328420
ALOHA and Q-Learning based medium access control for Wireless Sensor Networks

Cited by 61 publications (54 citation statements)
References 12 publications
“…In WSNs, a number of sensors cooperate to transfer data efficiently. Designing MAC protocols for WSNs therefore poses challenges different from those in typical wireless networks, notably regarding energy consumption and latency [112]. Also, the duty cycle (i.e., the fraction of time that a sensor node is active) has to be controlled to conserve energy.…”
Section: E. Medium Access Control (MAC)
Citation type: mentioning, confidence: 99%
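The duty-cycle point above is easy to quantify: average power scales roughly linearly with the fraction of time the radio is awake. A back-of-the-envelope sketch, using hypothetical current-draw figures rather than numbers from the cited work:

```python
# Rough duty-cycle energy estimate (all current figures are assumed, for illustration).
ACTIVE_CURRENT_MA = 20.0   # radio on, receiving/transmitting (assumed)
SLEEP_CURRENT_MA = 0.02    # deep sleep (assumed)

def avg_current_ma(duty_cycle: float) -> float:
    """Average current draw of a node active for a given fraction of the time."""
    return duty_cycle * ACTIVE_CURRENT_MA + (1 - duty_cycle) * SLEEP_CURRENT_MA

for dc in (1.0, 0.10, 0.01):
    print(f"duty cycle {dc:6.1%}: average current {avg_current_ma(dc):6.3f} mA")
# Dropping from always-on to a 1% duty cycle cuts the average draw by ~99%.
```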
“…Similarly, Chu et al. [112] integrated slotted ALOHA and Q-learning to introduce a new MAC protocol for WSNs, called "ALOHA and Q-Learning based MAC with Informed Receiving" (ALOHA-QIR). ALOHA-QIR inherits features of both ALOHA and Q-learning, achieving a simple design, low resource requirements, and a low collision probability.…”
Section: E. Medium Access Control (MAC)
Citation type: mentioning, confidence: 99%
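The excerpt's description of ALOHA-QIR, slotted ALOHA whose slot choice is steered by Q-learning, can be sketched as follows. This is a minimal illustration of the combined ALOHA/Q-learning idea only: the reward values, learning rate, and greedy tie-breaking are assumptions, and the "informed receiving" mechanism is omitted, so this is not the published ALOHA-QIR specification.

```python
import random

class AlohaQNode:
    """Slotted-ALOHA node that learns which slot of the frame to transmit in.

    Illustrative sketch of the ALOHA + Q-learning idea; parameters and
    rewards are assumptions, not the ALOHA-QIR protocol as published.
    """

    def __init__(self, frame_len: int, alpha: float = 0.1):
        self.q = [0.0] * frame_len   # one Q-value per slot of the frame
        self.alpha = alpha           # learning rate (assumed value)

    def choose_slot(self) -> int:
        # Greedy slot choice; ties (e.g., at start-up) broken at random.
        best = max(self.q)
        return random.choice([s for s, v in enumerate(self.q) if v == best])

    def update(self, slot: int, success: bool) -> None:
        # +1 for an acknowledged transmission, -1 for a collision (assumed rewards).
        r = 1.0 if success else -1.0
        self.q[slot] += self.alpha * (r - self.q[slot])
```

Once a node's transmissions in some slot keep succeeding, that slot's Q-value dominates and the node settles there, which is what drives the low collision probability the excerpt mentions.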
“…We introduce a separate DQN agent for each output variable in A_t, defined as the action A_t^k selected by the k-th agent, where each k-th agent is responsible for updating the value Q(S_t, A_t^k; θ_k) of its action A_t^k given the shared state S_t. [Footnote 6: Action aggregation has rarely been evaluated, but the same idea, namely state aggregation, has been well studied as a basic function-approximation approach [31]. Footnote 7: The structure of the value-function approximator can also be designed specifically for RL agents with sub-tasks of significantly different complexity; there is no such requirement in our problem, so it is not considered.]…”
Section: B. Cooperative Multi-agent Learning Approach
Citation type: mentioning, confidence: 99%
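A minimal sketch of the "one agent per output variable" decomposition described above, with linear value approximators standing in for the deep networks of the excerpt; the class name, dimensions, learning rate, and update rule are illustrative assumptions.

```python
import numpy as np

class PerVariableQAgent:
    """The k-th agent scores only its own action A_t^k against the shared
    state S_t; a linear model stands in for the DQN (illustrative only)."""

    def __init__(self, state_dim: int, n_actions: int, lr: float = 0.01):
        self.theta = np.zeros((n_actions, state_dim))  # parameters theta_k
        self.lr = lr

    def q_values(self, state: np.ndarray) -> np.ndarray:
        return self.theta @ state  # Q(S_t, a; theta_k) for every action a

    def act(self, state: np.ndarray) -> int:
        return int(np.argmax(self.q_values(state)))

    def update(self, state: np.ndarray, action: int, target: float) -> None:
        # One SGD step on the squared TD error (target - Q)^2.
        td_error = target - self.q_values(state)[action]
        self.theta[action] += self.lr * td_error * state

# One agent per output variable; all of them observe the same shared state S_t.
state_dim, n_vars, n_actions = 8, 3, 4
agents = [PerVariableQAgent(state_dim, n_actions) for _ in range(n_vars)]
s_t = np.random.rand(state_dim)
joint_action = [agent.act(s_t) for agent in agents]  # A_t = (A_t^1, ..., A_t^K)
```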
“…In order to consider more complex and practical formulations, Reinforcement Learning (RL) emerges as a natural solution, given its ability to interact with the practical environment and to use feedback in the form of the number of successful and unsuccessful transmissions per TTI. Distributed RL based on tabular Q-learning (tabular-Q) has been proposed in [6]-[9]. In [6]-[8], the authors studied distributed tabular-Q in slotted-ALOHA networks, where each device learns to avoid collisions by finding a proper time slot in which to transmit its packets.…”
Section: Introduction
Citation type: mentioning, confidence: 99%
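A toy simulation of the distributed tabular-Q behaviour the excerpt attributes to [6]-[8]: several devices contend in a slotted-ALOHA frame, and each updates its own Q-table from per-slot success or collision feedback. Frame length, rewards, and the epsilon-greedy exploration are assumptions for illustration.

```python
import random

N_DEVICES, FRAME_LEN, ALPHA, FRAMES = 4, 4, 0.1, 500

# One independent Q-table per device: one value per candidate slot.
q = [[0.0] * FRAME_LEN for _ in range(N_DEVICES)]

def pick_slot(qrow, eps=0.05):
    # Epsilon-greedy: mostly exploit the best-valued slot, occasionally explore.
    if random.random() < eps:
        return random.randrange(FRAME_LEN)
    best = max(qrow)
    return random.choice([s for s, v in enumerate(qrow) if v == best])

for _ in range(FRAMES):
    slots = [pick_slot(q[d]) for d in range(N_DEVICES)]
    for d, s in enumerate(slots):
        success = slots.count(s) == 1           # alone in the slot -> delivered
        r = 1.0 if success else -1.0            # assumed reward shaping
        q[d][s] += ALPHA * (r - q[d][s])

print("learned slots:", [max(range(FRAME_LEN), key=row.__getitem__) for row in q])
# With at least as many slots as devices, the devices typically settle on
# distinct slots, i.e., they learn a collision-free schedule.
```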
“…RL-based protocols significantly reduce the energy consumption due to both idle listening and overhearing in the context of duty cycling. ALOHA and Q-learning have been integrated to establish a new MAC protocol, namely ALOHA-Q [3]. ALOHA-based techniques are important for certain categories of Wireless Personal Networks (WPNs) and…”
Section: Reinforcement Learning (RL) has been recently applied to des…
Citation type: mentioning, confidence: 99%