Agencies that manage mobile photo enforcement (MPE) programmes must decide where and when to send their limited resources to monitor compliance with speed limits. Typically, the goal is to select locations based on several concerns (e.g., high-collision sites, high-speed-violation sites, school zones), which in most cases conflict with one another: if certain locations are given more MPE resources, then by definition other locations receive less attention, and vice versa. This paper aims to provide insights into such MPE programme trade-offs. We present a systematic procedure for interpreting the results of a multiobjective MPE resource allocation problem. The procedure consists of three steps: (a) Pareto front (PF) generation, (b) front representation, and (c) trade-off analysis. First, to generate a PF, we sequentially apply two well-known scalar optimization methods to obtain a comprehensive set of Pareto-optimal solutions. Second, the K-medoids clustering algorithm and the silhouette index are adopted to partition the generated PF into similar-sized clusters, helping MPE programme agencies choose from a reduced set of solutions on the PF. Third, we use the response surface method to determine trade-off patterns on the PF. The front generation analysis showed that applying the two optimization methods together produced a nearly complete PF with a relatively uniform and dense spread of solutions. The identified set of solutions (13,210 cases) was then partitioned into 12 clusters using K-medoids with the silhouette index. To reduce decision fatigue for agencies, each cluster's representative solution is treated as a candidate MPE resource allocation. The trade-off analysis indicated how much must be sacrificed in the other objectives in order to increase attainment of one particular objective. Finally, the trade-off rate and elasticity were used to quantify the relationships between the considered objectives.

KEYWORDS: mobile photo enforcement programme planning, multiobjective optimization, Pareto front analysis, resource allocation, trade-off analysis
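To make the three-step procedure concrete, here is a minimal Python sketch on a toy biobjective problem. It is an illustration under stated assumptions, not the paper's implementation: a synthetic quarter-circle front stands in for the 13,210 MPE solutions, a simple weighted-sum scalarization stands in for the paper's two scalar optimization methods, K-medoids (via scikit-learn-extra) with the silhouette index handles step (b), and adjacent-point differences replace the paper's response surface method in step (c).

```python
# Sketch of steps (a)-(c) on a toy biobjective problem. Assumptions (not from
# the paper): a synthetic convex front replaces the MPE solution set, and the
# number of clusters k is scanned over 2..12 rather than fixed at 12.
import numpy as np
from sklearn.metrics import silhouette_score
from sklearn_extra.cluster import KMedoids  # pip install scikit-learn-extra

# (a) Front generation: each scalarization weight is mapped directly to its
# optimal point on a toy quarter-circle front (in the real problem, each
# weight would require solving an MPE resource allocation optimization).
weights = np.linspace(0.01, 0.99, 500)
f1 = weights                       # attainment of objective 1
f2 = np.sqrt(1.0 - f1**2)          # toy trade-off: f2 falls as f1 rises
front = np.column_stack([f1, f2])

# (b) Front representation: partition the front with K-medoids, pick k by the
# silhouette index, and keep the medoids as representative solutions.
best_k, best_score, best_model = None, -1.0, None
for k in range(2, 13):
    model = KMedoids(n_clusters=k, random_state=0).fit(front)
    score = silhouette_score(front, model.labels_)
    if score > best_score:
        best_k, best_score, best_model = k, score, model
representatives = best_model.cluster_centers_   # one candidate per cluster
print(f"k={best_k}, silhouette={best_score:.3f}")

# (c) Trade-off analysis between adjacent representatives, sorted by f1:
# rate = delta_f2 / delta_f1 (units of f2 sacrificed per unit gained in f1),
# elasticity = (delta_f2 / f2) / (delta_f1 / f1) (percentage-based rate).
reps = representatives[np.argsort(representatives[:, 0])]
d = np.diff(reps, axis=0)
rate = d[:, 1] / d[:, 0]
elasticity = rate * reps[:-1, 0] / reps[:-1, 1]
print("trade-off rates:", np.round(rate, 3))
print("elasticities:", np.round(elasticity, 3))
```

The medoids collected in `representatives` play the role of the per-cluster candidate allocations described in the abstract: an agency would compare a dozen representative solutions rather than thousands of Pareto-optimal ones.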
Autonomous vehicles (AVs) are expected to operate on mobility-on-demand (MoD) platforms because AV technology enables flexible self-relocation and system-optimal coordination. Unlike existing studies, which focus on MoD systems with either a pure AV fleet or a pure conventional vehicle (CV) fleet, we aim to optimize the real-time fleet management of an MoD system with a mixed autonomy of CVs and AVs. We consider the realistic case in which heterogeneous, boundedly rational drivers determine and learn their relocation strategies to improve their own compensation, whereas AVs fully comply with the platform's operational decisions. To achieve a high level of service with a mixed fleet, we propose that the platform prioritize human drivers in matching decisions when on-demand requests arrive, while dynamically determining the AV relocation tasks and the optimal commission fee to influence drivers' behavior. However, making efficient real-time fleet management decisions is challenging when spatiotemporal uncertainty in demand and complex interactions between human drivers and the operator must be anticipated in the operator's decision making. To tackle these challenges, we develop a two-sided multiagent deep reinforcement learning (DRL) approach in which the operator acts as a supervisor agent on one side, making centralized decisions for the mixed fleet, while each CV driver acts as an individual agent on the other side, learning to make decentralized decisions noncooperatively. We establish a two-sided multiagent advantage actor-critic algorithm to train the agents on both sides simultaneously; this is the first scalable algorithm developed for mixed fleet management. Furthermore, we formulate a two-head policy network that enables the supervisor agent to make multitask decisions efficiently from a single policy network, greatly reducing computational time. The two-sided multiagent DRL approach is demonstrated in a New York City case study using real taxi trip data. Results show that our algorithm makes high-quality decisions quickly and outperforms benchmark policies. The efficiency of the two-head policy network is demonstrated by comparison with the case of two separate policy networks. Our fleet management strategy makes both the platform and the drivers better off, especially in scenarios with high demand volume.

History: This paper has been accepted for the Transportation Science Special Issue on Emerging Topics in Transportation Science and Logistics.

Funding: This work was supported by the Singapore Ministry of Education Academic Research [Grant MOE2019-T2-2-165] and the Singapore Ministry of Education [Grant R-266-000-135-114].
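The "two-head policy network" lends itself to a short sketch. The following PyTorch code is a hypothetical illustration, not the authors' architecture: the layer sizes, the discretized commission-fee grid, the zone count, and the state encoding are all assumptions. It shows the core idea that one shared trunk feeds two task heads (AV relocation and commission fee) plus a critic, so a single forward pass yields both supervisor decisions for an advantage actor-critic update.

```python
# Minimal sketch of a two-head actor-critic policy network. All dimensions
# and the discrete action spaces are illustrative assumptions.
import torch
import torch.nn as nn

class TwoHeadActorCritic(nn.Module):
    def __init__(self, state_dim: int, n_zones: int, n_fee_levels: int):
        super().__init__()
        self.trunk = nn.Sequential(            # shared state encoder
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        self.reloc_head = nn.Linear(128, n_zones)       # AV relocation policy
        self.fee_head = nn.Linear(128, n_fee_levels)    # commission-fee policy
        self.value_head = nn.Linear(128, 1)             # critic for A2C

    def forward(self, state: torch.Tensor):
        h = self.trunk(state)                  # encoded once, used by all heads
        reloc_dist = torch.distributions.Categorical(logits=self.reloc_head(h))
        fee_dist = torch.distributions.Categorical(logits=self.fee_head(h))
        return reloc_dist, fee_dist, self.value_head(h)

# Usage: sample both decisions from one forward pass; in an A2C update the
# critic's advantage estimate would weight both heads' log-probabilities.
net = TwoHeadActorCritic(state_dim=32, n_zones=10, n_fee_levels=5)
reloc_dist, fee_dist, value = net(torch.randn(4, 32))   # batch of 4 states
reloc_action, fee_action = reloc_dist.sample(), fee_dist.sample()
policy_loss_terms = -(reloc_dist.log_prob(reloc_action)
                      + fee_dist.log_prob(fee_action))
```

Sharing the trunk across heads is what saves computation relative to two separate policy networks: the state encoding is computed once per decision step instead of once per task.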