Traffic congestion costs drivers an average of $1,200 a year in wasted fuel and time, and most travelers are becoming less tolerant of unexpected delays. Substantial efforts have been made to account for the impact of non-recurring sources of congestion on travel time reliability. The 6th edition of the Highway Capacity Manual (HCM) provides structured, step-by-step guidance for estimating reliability performance measures on freeway facilities. However, practical implementation of these methods poses its own challenges. Performing these analyses requires assimilating data scattered across different platforms, a task complicated further by the fact that data and data platforms differ from state to state. This paper focuses on practical calibration and validation methods for the core and reliability analyses described in the HCM. The main objective is to provide HCM users with guidance on collecting data for freeway reliability analysis and on validating the reliability performance measures predicted by the HCM methodology. A real-world case study of three routes on Interstate 40 in the Raleigh-Durham area of North Carolina is used to describe the steps required to conduct this analysis. The travel time index (TTI) distribution reported by the HCM models was found to match that from probe-based travel time data closely up to the 80th percentile. However, because of a spatial and temporal mismatch between the actual and HCM-estimated incident allocation patterns, and because traffic demands in the HCM methods are by default insensitive to the occurrence of major incidents, the HCM approach tended to generate larger travel time values in the upper region of the travel time distribution.
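The percentile-by-percentile comparison of TTI distributions described above can be sketched as follows. This is a minimal illustration, not the study's method: the free-flow time, sample sizes, and lognormal parameters are hypothetical stand-ins for HCM-modeled and probe-based travel times.

```python
import numpy as np

def tti_distribution(travel_times, free_flow_time, percentiles=range(5, 100, 5)):
    """Compute the travel time index (TTI = travel time / free-flow time)
    at a set of percentiles of the travel time distribution."""
    tti = np.asarray(travel_times, dtype=float) / free_flow_time
    return {p: float(np.percentile(tti, p)) for p in percentiles}

# Hypothetical data: HCM-modeled vs. probe-based route travel times (minutes).
rng = np.random.default_rng(0)
free_flow = 12.0
hcm_times = rng.lognormal(mean=np.log(13.0), sigma=0.25, size=1000)
probe_times = rng.lognormal(mean=np.log(13.0), sigma=0.18, size=1000)

hcm_tti = tti_distribution(hcm_times, free_flow)
probe_tti = tti_distribution(probe_times, free_flow)

# Agreement is typically assessed in the body of the distribution;
# divergence shows up in the upper tail (e.g., above the 80th percentile).
for p in (50, 80, 95):
    print(f"P{p}: HCM TTI = {hcm_tti[p]:.2f}, probe TTI = {probe_tti[p]:.2f}")
```

In practice the probe-based distribution would come from commercial travel time data for the study routes, and the modeled distribution from the HCM reliability scenarios.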
This study developed a validation procedure for a dynamic traffic assignment–based mesoscopic simulator and identified its relevant data requirements. Model inconsistencies at the network level can be detected through route-based validation with probe vehicle travel time data, while link-based validation uses point detector–based volume and speed data. A procedure for filtering the validation data set to make it consistent and comparable with the simulation results is outlined in the paper. Two assignment methods were tested against a detailed validation data set through a case study. The no-diversion assignment approach yielded higher travel times than the field-observed average. In contrast, simulated travel times with the optimal diversion method implied more rational driver behavior than was observed in the field, along with more diversion of trips onto arterial routes in the vicinity of a work zone. Finally, a link-based performance comparison revealed more severe queue spillback effects on the links upstream of the work zone than were observed in the field.
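The kind of filtering step described above, making field data comparable with simulator output, can be sketched as aggregating both sources onto common time bins before computing an error measure. This is a hedged illustration under assumed record formats; the bin length, record layout, and MAPE metric are assumptions, not the paper's exact procedure.

```python
from collections import defaultdict

def bin_by_interval(records, interval_s=900):
    """Aggregate (timestamp_s, value) records into fixed time bins (default
    15 min) by mean, so detector data and simulator output line up."""
    bins = defaultdict(list)
    for t, v in records:
        bins[int(t // interval_s)].append(v)
    return {b: sum(vs) / len(vs) for b, vs in bins.items()}

def mape(observed, simulated):
    """Mean absolute percentage error over the time bins both sources cover."""
    common = sorted(set(observed) & set(simulated))
    errs = [abs(simulated[b] - observed[b]) / observed[b] for b in common]
    return 100.0 * sum(errs) / len(errs)

# Hypothetical link-speed records: (seconds since start of period, speed in mph).
field = [(0, 60.0), (300, 58.0), (900, 40.0), (1200, 42.0)]
sim = [(0, 59.0), (600, 57.0), (900, 44.0), (1500, 43.0)]
print(f"link-speed MAPE: {mape(bin_by_interval(field), bin_by_interval(sim)):.1f}%")
```

Restricting the comparison to bins covered by both data sets is one simple way to keep the validation data set consistent with the simulation results, as the paper's filtering procedure requires.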