2022
DOI: 10.1287/trsc.2021.1042
Dynamic Ride-Hailing with Electric Vehicles

Abstract: We consider the problem of an operator controlling a fleet of electric vehicles for use in a ride-hailing service. The operator, seeking to maximize profit, must assign vehicles to requests as they arise as well as recharge and reposition vehicles in anticipation of future requests. To solve this problem, we employ deep reinforcement learning, developing policies whose decision making uses Q-value approximations learned by deep neural networks. We compare these policies against a reoptimization…
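To make the decision structure described in the abstract concrete, here is a minimal sketch, assuming a PyTorch feed-forward network, of a Q-value approximator that scores one vehicle's candidate decisions (serve a request, recharge, reposition). The class name VehicleQNet, the feature layout, and the layer sizes are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (not the authors' architecture): a feed-forward network that
# approximates Q-values for one vehicle's candidate decisions -- serve a request,
# recharge, or reposition. Feature layout and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class VehicleQNet(nn.Module):
    def __init__(self, state_dim: int = 16, n_actions: int = 3, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),  # Q(s, a) for {serve, recharge, reposition}
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

# Greedy decision for one vehicle: pick the action with the highest estimated Q-value.
q_net = VehicleQNet()
state = torch.randn(1, 16)           # e.g. location, battery level, time, local demand
action = q_net(state).argmax(dim=1)  # 0 = serve request, 1 = recharge, 2 = reposition
```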

Cited by 54 publications (22 citation statements) | References 40 publications
“…Multi-agent RL-based algorithms have also been introduced for the Dial-a-Ride Problem with multiple vehicles and stochastic orders (Qin et al. 2020, Kullman et al. 2020, Holler et al. 2019). Qin et al. (2020) and Tang et al. (2019) implemented Q-values in the form of one Deep Q-Network (DQN) per vehicle and used a central combinatorial optimization problem as a coordinator to assign orders to vehicles.…”
Section: MDP-Based Solution Methods for Stochastic and Dynamic VRPs (mentioning)
confidence: 99%
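The coordination scheme described in this excerpt, per-vehicle Q-values combined through a central combinatorial assignment problem, can be illustrated with a short sketch. The Q-value matrix below is random placeholder data; in the cited papers it would come from each vehicle's trained DQN, and the solver choice (SciPy's linear_sum_assignment) is an assumption about one reasonable way to solve the coordinator's assignment step.

```python
# Sketch of the coordination idea: per-vehicle Q-values fed into a central
# assignment problem. The Q matrix here is random placeholder data; in the
# cited papers it would come from each vehicle's trained DQN.
import numpy as np
from scipy.optimize import linear_sum_assignment

n_vehicles, n_orders = 4, 3
q_values = np.random.rand(n_vehicles, n_orders)  # Q[i, j]: value of vehicle i serving order j

# Maximize total estimated value by solving the assignment problem.
rows, cols = linear_sum_assignment(-q_values)    # negate: the solver minimizes cost
assignment = {int(v): int(o) for v, o in zip(rows, cols)}
print(assignment)  # vehicle index -> assigned order index
```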
“…Qin et al. (2020) and Tang et al. (2019) implemented Q-values in the form of one Deep Q-Network (DQN) per vehicle and used a central combinatorial optimization problem as a coordinator to assign orders to vehicles. Kullman et al. (2020) adopted an attention encoder-decoder as the central coordinator and trained the model with Actor-Critic. For a similar problem, Holler et al. (2019) compared Actor-Critic and DQN methods without observing significant performance differences.…”
Section: MDP-Based Solution Methods for Stochastic and Dynamic VRPs (mentioning)
confidence: 99%
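As a rough sketch of the attention-based coordination idea attributed to Kullman et al. in the excerpt above: vehicles and open requests are embedded, vehicle-request compatibilities are scored with dot-product attention, and the scores are read as actor logits alongside a critic value for actor-critic training. The module name, dimensions, single-head scoring, and mean pooling are all assumptions, not the paper's architecture.

```python
# Assumed, simplified illustration of attention-based dispatch coordination with an
# actor-critic readout; not the architecture from the cited paper.
import torch
import torch.nn as nn

class AttentionDispatcher(nn.Module):
    def __init__(self, veh_dim: int = 8, req_dim: int = 6, embed: int = 64):
        super().__init__()
        self.veh_enc = nn.Linear(veh_dim, embed)   # encoder for vehicle features
        self.req_enc = nn.Linear(req_dim, embed)   # encoder for request features
        self.critic = nn.Linear(embed, 1)          # state-value head for actor-critic

    def forward(self, vehicles, requests):
        v = self.veh_enc(vehicles)                        # (n_veh, embed)
        r = self.req_enc(requests)                        # (n_req, embed)
        logits = v @ r.T / v.shape[-1] ** 0.5             # (n_veh, n_req) compatibility scores
        policy = torch.softmax(logits.flatten(), dim=0)   # distribution over (vehicle, request) pairs
        value = self.critic(v.mean(dim=0))                # crude fleet-level value estimate
        return policy, value

dispatcher = AttentionDispatcher()
policy, value = dispatcher(torch.randn(5, 8), torch.randn(3, 6))
```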
“…This is not the case in our multi-agent charging station search setting: here, each agent terminates her search once she has found at least one non-shareable available resource. Existing work on multi-agent settings for EVs mostly focuses on autonomous EV fleet management, such as ride-sharing planning (Al-Kanj et al. 2020) or online request matching for ride-hailing (Kullman et al. 2021a), and does not cover stochastic resource search problems.…”
Section: Related Literature (mentioning)
confidence: 99%
“…On the electric vehicle operations side of the problem, many studies have focused on simple myopic policies [13,3,24,4], while others have attempted to incorporate planning for future demand [1,40,20,17], though these methods do not necessarily scale to operational problem sizes.…”
Section: Literature Review (mentioning)
confidence: 99%
“…One approach is to use approximate dynamic programming (ADP), such as [1], which uses ADP to determine when vehicles pick up new passengers and whether they should charge. In [40] and [20], deep reinforcement learning is used to develop policies that determine when vehicles should accept new customers and when they should charge. They suggest that the learning process allows the system to anticipate future demand.…”
Section: Literature Review (mentioning)
confidence: 99%
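The anticipation effect mentioned in this excerpt comes from the bootstrapped future-value term in temporal-difference targets. Below is a minimal, assumption-laden DQN-style update sketch (the helper dqn_update, the network shapes, and all hyperparameters are hypothetical) showing where the estimate of future earnings enters the accept-versus-charge trade-off.

```python
# Sketch of why a learned policy can anticipate demand: the DQN target includes the
# discounted value of the *next* state, so accepting a customer or charging now is
# traded off against estimated future earnings. All details here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def dqn_update(q_net, target_net, batch, optimizer, gamma: float = 0.99):
    states, actions, rewards, next_states, done = batch
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Anticipation of future demand enters through this bootstrapped term.
        next_v = target_net(next_states).max(dim=1).values
        target = rewards + gamma * (1.0 - done) * next_v
    loss = F.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Tiny usage example with placeholder networks and random transitions.
net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 3))
tgt = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 3))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
batch = (torch.randn(32, 16), torch.randint(0, 3, (32,)),
         torch.randn(32), torch.randn(32, 16), torch.zeros(32))
dqn_update(net, tgt, batch, opt)
```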