2012 IEEE International Symposium on Industrial Electronics
DOI: 10.1109/isie.2012.6237195

Distributed QoS routing algorithm in large scale Wireless Sensor Networks

Abstract: This paper presents a novel routing protocol for large-scale Wireless Sensor Networks (WSNs) based on the Learning Automata method, codenamed DRLR (distributed reinforcement learning routing). In this method, each node is equipped with a learning automaton so that it can learn the best path for transmitting data toward the sink. The approach proved to be efficient, reliable, and scalable. It also prevents routing holes by considering network density and the average of the available energy levels. The approach also incr…
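The abstract describes each node holding a learning automaton that gradually learns which neighbor to use as the next hop toward the sink. As a minimal sketch of how such a node-local automaton could work, the snippet below uses a standard linear reward–penalty update over next-hop probabilities; the reward signal, learning rates, and class name are illustrative assumptions, not the paper's exact scheme (which also weighs network density and residual energy).

```python
import random

class NextHopAutomaton:
    """Per-node learning automaton choosing a next hop toward the sink.

    Generic linear reward-penalty (L_RP) update; the paper's actual reward
    definition (energy level, density, hole avoidance) is not reproduced.
    Neighbor IDs and learning rates here are hypothetical.
    """

    def __init__(self, neighbors, alpha=0.1, beta=0.05):
        self.neighbors = list(neighbors)
        self.alpha = alpha  # reward learning rate
        self.beta = beta    # penalty learning rate
        n = len(self.neighbors)
        self.prob = {v: 1.0 / n for v in self.neighbors}

    def choose(self):
        """Sample a next hop according to the current probability vector."""
        r, acc = random.random(), 0.0
        for v in self.neighbors:
            acc += self.prob[v]
            if r <= acc:
                return v
        return self.neighbors[-1]

    def update(self, chosen, rewarded):
        """Reinforce the chosen neighbor on success, penalize it on failure."""
        n = len(self.neighbors)
        for v in self.neighbors:
            if rewarded:
                if v == chosen:
                    self.prob[v] += self.alpha * (1.0 - self.prob[v])
                else:
                    self.prob[v] -= self.alpha * self.prob[v]
            elif n > 1:
                if v == chosen:
                    self.prob[v] -= self.beta * self.prob[v]
                else:
                    self.prob[v] += self.beta * (1.0 / (n - 1) - self.prob[v])
```

A node would call `choose()` when forwarding a packet and `update()` once it learns whether that forwarding succeeded (e.g., via an acknowledgment), so the probability mass drifts toward reliable, energy-rich neighbors.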

Cited by 5 publications (3 citation statements) · References 19 publications
“…The RL algorithm has its inherent advantages and is well suited for dealing with distributed problems [17]. This paper proposes an RL-based algorithm to provide new ideas and incentives for addressing the routing issue in WSNs.…”
Section: Discussion (mentioning)
confidence: 99%
“…Our algorithm is compared with two different methods: i) the basic routing protocol (DRLR) proposed in [13], which is without aggregation, and ii) the ECHSSDA algorithm given in [5]. The results show that the proposed method significantly reduces the energy consumption of DRLR by adding the aggregation technique to DRLR, and it also outperforms ECHSSDA, especially when the environment has low density.…”
Section: (Pj) (mentioning)
confidence: 99%
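The citing work above attributes its energy savings to layering data aggregation on top of DRLR, though the quoted statement does not specify the aggregation operator. A minimal sketch, assuming a simple buffer-and-average scheme at each relay node (the class name, mean operator, and buffer size are hypothetical):

```python
class AggregatingRelay:
    """Buffers incoming readings and forwards one aggregate instead of
    every packet, reducing the number of radio transmissions.

    The aggregation operator (mean) and buffer size are assumptions;
    the cited work's exact scheme is not given in the quoted text.
    """

    def __init__(self, send, buffer_size=8):
        self.send = send              # callable that transmits toward the sink
        self.buffer_size = buffer_size
        self.buffer = []

    def receive(self, reading):
        self.buffer.append(reading)
        if len(self.buffer) >= self.buffer_size:
            self.flush()

    def flush(self):
        if self.buffer:
            aggregate = sum(self.buffer) / len(self.buffer)
            self.send(aggregate)      # one transmission instead of buffer_size
            self.buffer.clear()
```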
“…22,23 The RL algorithm has its inherent advantages and is well suitable for dealing with distributed problems. 24,25 In this algorithm, each possible action is assigned a Q-value which indicates the approximate goodness of the action. 26 In the learning process, according to the Q-value of each action, the agent selects one action.…”
Section: Introductionmentioning
confidence: 99%
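The passage above describes selecting among actions by their Q-values. The quoted text does not state the selection rule, so the sketch below assumes a standard epsilon-greedy policy with a one-step Q update; function names and parameters are illustrative.

```python
import random

def select_action(q_values, epsilon=0.1):
    """Pick an action from a dict of {action: Q-value}.

    Epsilon-greedy is assumed for illustration: with probability epsilon
    explore a random action, otherwise exploit the highest-valued one.
    """
    if random.random() < epsilon:
        return random.choice(list(q_values))
    return max(q_values, key=q_values.get)

def update_q(q_values, action, reward, alpha=0.5):
    """Move the chosen action's Q-value toward the observed reward."""
    q_values[action] += alpha * (reward - q_values[action])
```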