2022
DOI: 10.1016/j.comnet.2022.108787
Throughput and latency in the distributed Q-learning random access mMTC networks

Cited by 9 publications (12 citation statements, all classified as mentioning) | References 19 publications
“…The results of Figures 4 and 5 were generated considering α = 0.1, a typical value found in the literature [19-21], as each reward sent by the central node weighs only 10% in each Q-table update. However, when proposing the MPL-QL algorithm in the NOMA scenario, it is necessary to assess whether the change in the value of α impacts the choice of the most suitable 𝒫.…”
Section: Numerical Results
Confidence: 99%
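
For context, the 10% weighting described above follows directly from the standard Q-learning update with learning rate α: each new reward contributes a fraction α of the updated Q-value, while the remaining 1 − α preserves the old estimate. A minimal sketch in Python, with illustrative names not taken from the paper:

    # Sketch of the exponential-smoothing Q-table update the quote refers to:
    # with alpha = 0.1, each reward from the central node weighs only 10%
    # of the new Q-value. Names and structure here are illustrative.
    def update_q(q_table, action, reward, alpha=0.1):
        # Q(a) <- (1 - alpha) * Q(a) + alpha * reward
        q_table[action] = (1.0 - alpha) * q_table[action] + alpha * reward
        return q_table

    q = [0.0] * 4                          # one Q-value per candidate slot
    q = update_q(q, action=2, reward=1.0)  # ACK received for slot 2
    print(q)                               # [0.0, 0.0, 0.1, 0.0]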
“…The performance of the proposed MPL-QL algorithm was compared with other methods available in the literature, specifically: (a) SA, where there is no feedback from the central node to the devices; the devices simply send all their UL packets, and a success is counted whenever there is no collision; (b) Independent QL [19]; (c) Collaborative QL [19]; and (d) Packet-Based QL [21].…”
Section: Numerical Results
Confidence: 99%
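
The SA baseline in (a) reduces to counting a slot as successful only when exactly one device transmits in it. A minimal sketch under that assumption (device and slot counts are arbitrary placeholders, not values from the paper):

    # Illustrative slotted-ALOHA (SA) baseline as described in (a): no
    # feedback, each device picks a random slot for its UL packet, and a
    # packet succeeds only if no other device chose the same slot.
    import random
    from collections import Counter

    def sa_successes(num_devices=50, num_slots=20, rng=random):
        picks = [rng.randrange(num_slots) for _ in range(num_devices)]
        occupancy = Counter(picks)
        # A slot with exactly one transmission is collision-free.
        return sum(1 for count in occupancy.values() if count == 1)

    print(sa_successes())  # number of collision-free uplink packets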