2011 Eighth International Conference on Information Technology: New Generations
DOI: 10.1109/itng.2011.138

Simulated Annealing Based Hierarchical Q-Routing: A Dynamic Routing Protocol

Cited by 10 publications (5 citation statements)
References 8 publications
“…To assess the performance of the proposed method, we compare it against the state-of-the-art Q-routing algorithms discussed in section I, including (i) Random Exploration-Exploitation Routing (REE-Routing), (ii) Probabilistic Exploration Routing (PE-Routing), (iii) Conventional Q-Routing [18], (iv) Adaptive learning rates Full-Echo Q-Routing (AFEQ-Routing) [24], and (v) Simulated Annealing based Q-routing (SAHQ-Routing) [54]. Methods (i) and (ii) are simulated for the sake of comparison only.…”
Section: Simulation Results
confidence: 99%
“…This gain is achieved by controlling the temperature parameter (Fig. 7: the average of the temperature parameter T over time for the proposed method, as well as the baseline method with non-adaptive temperature [54], for networks with different average speeds).…”
Section: Simulation Results
confidence: 99%
“…With the decrease of temperature, the global optimal solution is explored randomly in the solution space according to the Metropolis criterion, avoiding getting trapped in a local optimum. For the routing problem, SA is combined with the Q-routing algorithm, which has a self-adaptive learning rate for dynamic exploration [36,37]. We utilize the SA algorithm to dynamically adjust the exploration rate and the dynamic greedy factor for RL through the temperature decline rate.…”
Section: Related Work
confidence: 99%
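As a rough illustration of the mechanism described in this statement, the following Python sketch pairs a Metropolis-style acceptance test over per-neighbor Q estimates with a geometric cooling schedule, so exploration shrinks as the temperature declines. The function names, cooling rate, and toy Q values are illustrative assumptions, not code taken from [36,37] or from the SAHQ-Routing paper itself.

```python
import math
import random

def sa_select_next_hop(q_values, temperature):
    """Pick a next hop using a Metropolis-style acceptance rule.

    q_values: dict mapping neighbor -> estimated delivery time (lower is better).
    temperature: current annealing temperature; high T explores, low T exploits.
    """
    best = min(q_values, key=q_values.get)        # greedy (lowest-cost) choice
    candidate = random.choice(list(q_values))     # random alternative to consider
    delta = q_values[candidate] - q_values[best]  # cost increase of exploring
    # Metropolis criterion: always accept an equally good hop, otherwise accept
    # a worse hop with probability exp(-delta / T).
    if delta <= 0 or random.random() < math.exp(-delta / max(temperature, 1e-9)):
        return candidate
    return best

def anneal(temperature, cooling_rate=0.99, t_min=0.01):
    """Geometric cooling schedule: exploration shrinks as routing converges."""
    return max(temperature * cooling_rate, t_min)

# Example: one routing decision at a node with three neighbors (hypothetical values).
q = {"n1": 12.0, "n2": 9.5, "n3": 15.2}  # estimated delivery times in ms
T = 5.0
next_hop = sa_select_next_hop(q, T)
T = anneal(T)
```

At high T the node frequently accepts non-greedy neighbors, which keeps the Q estimates fresh under changing traffic; as T declines, decisions converge toward the greedy route.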
“…Simulation results show that PQ-routing is superior to Q-routing in terms of both learning speed and adaptability. There are also many other extensions of Q-routing, e.g., full echo Q-routing [220], dual reinforcement Q-routing [221], ant-based Q-Routing [222], gradient ascent Q-routing [223], Q-probabilistic routing [224] and simulated annealing based hierarchical Q-routing [225]. In [142], a self-learning routing protocol based on reinforcement learning (RLSRP), specific to the FANET, is studied.…”
Section: Intelligent Algorithms Boost the Communication and Networking
confidence: 99%
“…The intelligent algorithms could be exploited to improve the performance of the communication and networking from different layers of the network [142], [210]–[225]. Multi-agent decisions reduce the communication overhead: multi-agent decision making can be introduced to let UAVs decide whether to, when to, what to, and with whom to communicate. It makes the communication more precise, so that the most needed information is transmitted to the most relevant nodes at the most appropriate time.…”
Section: Intelligent Algorithms Boost the Communication and Networking
confidence: 99%