Rideshare platforms such as Uber and Lyft dynamically dispatch drivers to match riders' requests. We model the dispatching process in rideshare as a Markov chain that takes into account the geographic mobility of both drivers and riders over time. Prior work explores dispatch policies in the limit of such Markov chains; we characterize when this limit assumption is valid under a variety of natural dispatch policies. We give explicit bounds on convergence in general, and exact convergence rates (including constants) for special cases. Then, on simulated and real transit data, we show that our bounds characterize convergence rates, even when the necessary theoretical assumptions are relaxed. Additionally, these policies compare well against a standard reinforcement learning algorithm that optimizes for profit but has no convergence guarantees.
Introduction

Rideshare firms such as Uber, Lyft, and Didi Chuxing dynamically match riders to drivers via an online, digital platform. Riders request a driver through an online portal or mobile app; a driver is matched by the platform to a rider based on geographic proximity, driver preferences, pricing, and other factors. The rideshare driver then picks up the rider at her request location, transfers her to her destination, and reenters the platform to be matched again, albeit at a new geographic location. Part of the larger sharing economy, rideshare firms are increasingly competitive against traditional taxi services due to their ease of use, lower pricing, and immediacy of service [14].

Matching riders to rideshare drivers is nontrivial. While the core process is a form of the well-studied online matching problem [23], current models developed in the EconCS, AI, and Operations Research communities [28,11,5] do not completely capture the mobile aspects of both the drivers and riders. Drivers and riders are agents who move about a constrained space (e.g., city streets), becoming (in)active periodically due to the matching process. When a platform receives a request, it must make a near-real-time dispatch decision among nearby drivers who are available at the current time. The platform's goal is to maximize an objective (e.g., revenue or throughput) by servicing requests in an online fashion, subject to various real-world constraints and challenges such as setting prices, predicting supply and demand, fairness considerations, competing with other firms, and so on [8,17,4].

In this paper, we study the dynamics of the nascent rideshare market under different dispatch strategies. Recent work uses Markov chains to model complex rideshare dynamics in a closed-world system, that is, a system with a fixed total supply of cars [28,4,5].
These works assume the Markov chains reach their stationary distributions quickly, and thus all prior work optimizes dispatch strategies in the limit. As a complement, the present paper characterizes, theoretically and empirically, when that limit assumption is valid under a variety of natural strategies.
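To make the limit assumption concrete, the following is a minimal sketch (not from the paper; the 3-zone transition matrix is entirely hypothetical) of how one can check how quickly a closed-world driver-location chain approaches its stationary distribution, by tracking the total-variation distance between the time-t distribution and the stationary one.

```python
import numpy as np

# Hypothetical closed-world chain: states are zones a driver can occupy
# between dispatches; P is an illustrative transition matrix, not one
# derived from the paper's model.
P = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.4, 0.5],
])

def stationary(P):
    """Stationary distribution: the left eigenvector of P for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

def tv_distance(p, q):
    """Total-variation distance between two distributions."""
    return 0.5 * np.abs(p - q).sum()

pi = stationary(P)
mu = np.array([1.0, 0.0, 0.0])  # all drivers start concentrated in zone 0

dists = []
for t in range(20):
    dists.append(tv_distance(mu, pi))
    mu = mu @ P  # one step of the chain

# For an irreducible aperiodic chain, this sequence decays geometrically
# at a rate governed by the second-largest eigenvalue modulus of P;
# "optimizing in the limit" is justified only when this decay is fast.
```

The decay rate read off from `dists` is exactly the kind of quantity the paper's convergence bounds control: if mixing is slow relative to how often demand patterns shift, stationary-distribution analysis can be misleading.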