2022
DOI: 10.1016/j.ins.2022.06.056
QMOEA: A Q-learning-based multiobjective evolutionary algorithm for solving time-dependent green vehicle routing problems with time windows

Cited by 62 publications (8 citation statements) · References 45 publications
“…Individuals that are stronger (better suited to the environment) than their competitors are more likely to produce offspring that survive. Applying multiobjective evolutionary algorithms (MOEAs) to VRPs with multiple objectives has been an active research area in recent years [300,299,169,321,322,227]. Swarm intelligence [142] is a field of study that examines natural and artificial systems made up of numerous individuals that cooperate with each other.…”
Section: Metaheuristics (mentioning)
Confidence: 99%
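The selection pressure this excerpt describes (fitter individuals reproduce more often) is commonly realized with tournament selection. The following is a minimal sketch, not drawn from any of the cited papers; the route encoding and fitness values are hypothetical.

```python
import random

def binary_tournament(population, fitness):
    """Pick two individuals at random; the fitter one becomes a parent,
    so stronger individuals are more likely to produce offspring."""
    a, b = random.sample(range(len(population)), 2)
    return population[a] if fitness[a] >= fitness[b] else population[b]

# Toy usage: routes over four customers; fitness is negated route
# length, so shorter routes count as fitter (hypothetical numbers).
population = [[0, 2, 1, 3], [0, 1, 2, 3], [0, 3, 1, 2]]
fitness = [-12.4, -9.8, -15.1]
parent = binary_tournament(population, fitness)
print(parent)  # most often [0, 1, 2, 3], the shortest route
```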
“…Zhang et al. (2020) used a hybrid multi-objective evolutionary algorithm (HMOEA-GL) to achieve this goal. Other VRPTW objectives include minimizing total vehicle duration, energy consumption, total service time, and mileage, as well as improving customer satisfaction; these have been discussed by Qi et al. (2022), Wang et al. (2019), Pérez-Rodríguez and Hernández-Aguirre (2019), and Semiz and Polat (2020). Previous research has also pursued other goals, such as improving the performance of the algorithm proposed by Dong et al. (2018), proving the effectiveness of solution methods and stochastic models, and proposing and developing new mathematical models, as discussed by Errico et al. (2018), Truden et al. (2022), Srivastava et al. (2021), Ticha et al. (2019), Nguyen et al. (2016), and Baños et al. (2016).…”
Section: Library Review (mentioning)
Confidence: 99%
“…This space ranges from 50 to 1000 with a step size of 10 for the number of iterations, and from 5 to 100 with a step size of 1 for the number of wolves. An initial Q-table [19], with dimensions corresponding to the lengths of the parameter spaces, is randomly initialized with values between -1 and 1. Subsequently, the QL parameters are set: a learning rate (α) of 0.5, a discount factor (γ) of 0.9, an exploration rate (ε) of 0.1, and a total of 2000 training episodes.…”
Section: QL for Parameter Settings (mentioning)
Confidence: 99%
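The excerpt above specifies the Q-table shape, its random initialization in [-1, 1], and the Q-learning hyperparameters, but not the update loop. Below is a minimal, bandit-style sketch of such a parameter-tuning setup; the `evaluate` reward function is a hypothetical stand-in for running the underlying optimizer (e.g. GWO) with the chosen settings and scoring the result.

```python
import numpy as np

# Parameter spaces from the excerpt: iterations 50..1000 (step 10),
# number of wolves 5..100 (step 1).
iteration_space = np.arange(50, 1001, 10)
wolf_space = np.arange(5, 101, 1)

# Q-table with one axis per parameter space, randomly initialized
# in [-1, 1] as described in the excerpt.
rng = np.random.default_rng(seed=0)
q_table = rng.uniform(-1.0, 1.0, size=(len(iteration_space), len(wolf_space)))

alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration
episodes = 2000

def evaluate(n_iterations, n_wolves):
    """Hypothetical reward: in the cited work this would run the
    underlying optimizer with these settings and score the outcome.
    Stubbed here with an arbitrary placeholder."""
    return -abs(n_iterations - 500) / 500.0 - abs(n_wolves - 30) / 30.0

for _ in range(episodes):
    # epsilon-greedy choice of an (iterations, wolves) cell
    if rng.random() < epsilon:
        action = (rng.integers(len(iteration_space)),
                  rng.integers(len(wolf_space)))
    else:
        action = np.unravel_index(np.argmax(q_table), q_table.shape)
    reward = evaluate(iteration_space[action[0]], wolf_space[action[1]])
    # one-step Q-learning update (stateless/bandit form)
    q_table[action] += alpha * (reward + gamma * q_table.max() - q_table[action])

best = np.unravel_index(np.argmax(q_table), q_table.shape)
print("selected iterations:", iteration_space[best[0]],
      "| wolves:", wolf_space[best[1]])
```

Note the bandit framing collapses the state away; the cited paper may instead treat the current parameter pair as the state and moves within the table as actions, which changes only how the next-state maximum in the update is taken.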