The demand for transportation has increased significantly in recent decades in line with the growing demand for passenger and freight mobility, especially in urban areas. One of the most serious negative impacts is the rising level of traffic congestion. A possible short-term remedy is to use a traffic control system. However, most traffic control systems still rely on classical control algorithms, with the green phase sequence determined by a fixed strategy. Studies have shown that this approach does not relieve congestion as expected. In this paper, an adaptive traffic controller was developed using a reinforcement learning algorithm called deep Q-network (DQN). Since DQN performance depends on the choice of reward, an exponential reward function based on the macroscopic fundamental diagram (MFD) of the vehicle density distribution at the intersections was considered. The DQN's action is the selection of traffic phases, based on various rewards ranging from pressure to an adaptive weighting of pressure and queue length. The reinforcement learning algorithm was then evaluated in the SUMO traffic simulation software to assess the effectiveness of the proposed strategy. The DQN-based control algorithm with the adaptive reward mechanism achieved the best performance, with a throughput of 56,384 vehicles, ahead of the classical and conventional control methods: max-pressure (50,541 vehicles), Webster (50,366 vehicles) and uniform (46,241 vehicles). The significant increase in throughput achieved by the adaptive DQN-based controller with an exponential reward mechanism means that the proposed control could increase area productivity: the intersections could accommodate more vehicles, reducing the likelihood of congestion. The algorithm performed remarkably well in preventing congestion in a traffic network model of Central Jakarta, one of the world's most congested cities. This result indicates that traffic control design using the MFD as a performance measure can be a promising future direction in the development of reinforcement learning for traffic control systems.
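The abstract does not give the exact functional form of the exponential MFD-based reward, so the following Python sketch is illustrative only. It assumes the reward peaks when the mean approach density sits at the MFD's critical density (the point of maximum flow) and decays exponentially as density drifts toward gridlock or starvation; the function name, the `alpha` decay rate and the density figures are all hypothetical.

```python
import numpy as np

def exponential_mfd_reward(densities, critical_density, alpha=1.0):
    """Hypothetical exponential reward derived from the MFD.

    densities: per-approach vehicle densities (veh/km) at the intersection.
    critical_density: density at which the MFD's flow peaks (assumed known).
    alpha: decay rate controlling how sharply the reward falls off.
    """
    # Reward is 1.0 at the critical density and decays exponentially
    # with the relative deviation from it, in either direction.
    deviation = abs(np.mean(densities) - critical_density)
    return np.exp(-alpha * deviation / critical_density)

# Example: three approaches, assumed critical density of 40 veh/km
print(exponential_mfd_reward([35.0, 42.0, 38.0], critical_density=40.0))
```

In a SUMO experiment, a reward of this kind would be recomputed each control step from detector measurements and fed into the DQN update alongside the chosen phase.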
We developed a procurement decision model that takes into account partner selection and optimal order quantity, integrated with a planned production lead-time operational decision. Lead time is taken into account in the time and cost performance required to achieve on-time delivery in a supply chain (SC) system. We contrast the following three original equipment manufacturer (OEM) conditions: (1) a longer lead time for the buyer at a lower cost for the supplier, so the buyer has to incur a crashing cost to reduce the lead time; (2) a shorter lead time for the buyer at a higher cost for the supplier; (3) a shorter lead time for the buyer, achieved by incurring a crashing cost to reduce the inventory cost. Taking into consideration the direct impact of demand uncertainty on the buyers, we focus on simultaneous procedures for reaching an optimal solution. Our model's objective function comprises the operational costs and the lead-time decision across integrated supply chain entities (suppliers, sub-assembly manufacturers (OEMs) and buyers). Our numerical results show that, by trading off inventory cost against lead-time crashing cost, the best partner combinations for fulfilling market demand are B1-A2-S2 and B2-A1-S3. These two combinations yield a total profit of $141,102.95 per year for the SC system. We also propose strategies that can be applied at the OEM by considering order arrival timing and lead time.
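The core trade-off described here, inventory holding cost versus lead-time crashing cost, can be sketched as follows. This is a minimal, hypothetical stand-in for the paper's full model (which jointly optimizes partner selection and order quantity); the cost structure and all figures are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str              # procurement condition label (illustrative)
    lead_time_weeks: float # lead time the buyer actually experiences
    unit_cost: float       # purchase cost per unit (illustrative)
    crash_cost: float      # cost paid per order to shorten the lead time
    holding_rate: float    # annual holding cost per unit

def annual_cost(opt, demand, order_qty):
    purchase = demand * opt.unit_cost
    crashing = opt.crash_cost * (demand / order_qty)       # crash cost per order
    cycle_stock = opt.holding_rate * order_qty / 2         # average on-hand stock
    pipeline = opt.holding_rate * demand * opt.lead_time_weeks / 52  # in-transit stock
    return purchase + crashing + cycle_stock + pipeline

options = [
    # (1) supplier quotes 6 weeks at a lower unit cost; the buyer pays a
    #     crashing cost per order to bring the effective lead time to 3 weeks
    Option("low cost + crashing", lead_time_weeks=3.0, unit_cost=10.0,
           crash_cost=50.0, holding_rate=2.0),
    # (2) supplier quotes 3 weeks directly, but at a higher unit cost
    Option("short lead time, higher cost", lead_time_weeks=3.0, unit_cost=11.0,
           crash_cost=0.0, holding_rate=2.0),
]
demand, q = 10_000, 500
best = min(options, key=lambda o: annual_cost(o, demand, q))
print(best.name, annual_cost(best, demand, q))
```

With these made-up numbers, paying the crashing cost is cheaper than accepting the higher unit price; the paper's model makes this comparison jointly across all candidate supplier-OEM-buyer combinations.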