2014
DOI: 10.3846/16484142.2014.953997
Evaluating Staggered Working Hours Using a Multi-Agent-Based Q-Learning Model

Abstract: Staggered working hours have the potential to alleviate excessive demands on urban transport networks during the morning and afternoon peak hours and to influence the travel behavior of individuals by affecting their activity schedules and reducing their commuting times. This study proposes a multi-agent-based Q-learning algorithm for evaluating the influence of staggered work hours by simulating travelers’ time and location choices in their activity patterns. Interactions among multiple travelers were also consid…
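The abstract describes agents learning time choices via Q-learning under congestion feedback. A minimal sketch of that idea, assuming illustrative parameters and a congestion-count reward (the slot discretization, reward function, and learning rates below are assumptions for exposition, not the authors' specification):

```python
import random

random.seed(0)

N_SLOTS = 6             # discretized morning departure-time slots (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def congestion_cost(slot, departures):
    """Travel-time penalty grows with how many agents chose the same slot."""
    return departures.count(slot)

def choose(q):
    """Epsilon-greedy choice over a single-state Q-table."""
    if random.random() < EPS:
        return random.randrange(N_SLOTS)
    return max(range(N_SLOTS), key=lambda s: q[s])

def simulate(n_agents=50, episodes=300):
    qs = [[0.0] * N_SLOTS for _ in range(n_agents)]
    for _ in range(episodes):
        slots = [choose(q) for q in qs]          # all agents act simultaneously
        for q, s in zip(qs, slots):
            reward = -congestion_cost(s, slots)  # fewer co-departures = better
            # single-state Q-learning update (next state = same state)
            q[s] += ALPHA * (reward + GAMMA * max(q) - q[s])
    return [max(range(N_SLOTS), key=lambda s: q[s]) for q in qs]

final = simulate()
```

Because each agent's reward depends on the others' choices, the learned greedy policies tend to spread departures across slots rather than concentrating in one peak slot, which is the mechanism a staggered-hours evaluation exploits.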

Cited by 7 publications (3 citation statements) · References 23 publications
“…To study the marginal value theorem (MVT), Miller et al [98] implemented four algorithms in a large-scale avian foraging model, including online MVT (OMVT), extended OMVT (XOMVT), reinforcement-learning MVT (RLMVT), and extended RLMVT (XRLMVT), and they found that RL algorithms (RLMVT/XRLMVT) performed far better for approximating marginal values than continuous estimation or online algorithms (OMVT/XOMVT). Yang et al [99] also applied the Q-learning algorithm to model the complex time-space choice behaviors of agents in an activity-travel scheduling process.…”
Section: Microagent Situational Awareness Learning
confidence: 99%
“…It has been proved that agent-based simulation technology is effective, flexible, and expansible in traffic system modelling [20,21]. Yang et al [22] utilized a multi-agent-based Q-learning algorithm for evaluating the influence of SWH policy by simulating travelers' time and location choices in their activity patterns. Xie et al [23] simulated commuter departure time choices based on the BM reinforcement learning model in a many-to-one bus transit scenario.…”
Section: Introduction
confidence: 99%
“…It is difficult to find a tool for evaluating short-term, medium-term and long-term effects of a potential TDM strategy. Among those offering such tools, [50] proposed a multi-agent-based Q-learning algorithm to evaluate the effects of staggered working hours. In empirical analysis, regression analysis, or its variants, is still the technique used most to extract origins and destinations, search for matched trips, etc.…”
Section: Methods
confidence: 99%