We model the decision-making process of callers in call centers as an optimal stopping problem: after each waiting period, a caller decides whether to abandon the call or continue to wait. The utility of a caller is modeled as a function of her waiting cost and reward for service. We use a random-coefficients model to capture caller heterogeneity and estimate the cost and reward parameters of the callers using data from individual calls made to an Israeli call center. We also conduct a series of counterfactual analyses that explore the effects of changes in service discipline on resulting waiting times and abandonment rates. Our analysis reveals that modeling endogenous caller behavior can be important when major changes (such as a change in service discipline) are implemented and that using a model with an exogenously specified abandonment distribution may be misleading.
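The optimal stopping logic described above can be sketched by backward induction over a finite waiting horizon: at each period the caller compares the expected utility of waiting one more period against abandoning (normalized to zero utility). This is a minimal illustration of the mechanics, not the paper's model; the reward, cost, and per-period service-probability values are hypothetical.

```python
def continuation_values(reward, cost, p_service):
    """Backward induction for a caller's optimal stopping problem.

    p_service[t] is the caller's believed probability of entering service
    in period t, conditional on still waiting. Returns V[t], the value of
    continuing to wait at the start of period t (0 means abandoning is
    optimal).
    """
    horizon = len(p_service)
    v = [0.0] * (horizon + 1)  # V[horizon] = 0: caller abandons at the end
    for t in range(horizon - 1, -1, -1):
        # Pay one period's waiting cost, then either get served (reward)
        # or face the continuation value of the next period.
        wait = -cost + p_service[t] * reward + (1 - p_service[t]) * v[t + 1]
        v[t] = max(wait, 0.0)  # abandon (utility 0) if waiting is worse
    return v

# A caller who believes service grows less likely the longer she waits
# abandons at the first period where the continuation value hits 0.
beliefs = [0.5, 0.3, 0.1, 0.05, 0.01]  # hypothetical per-period beliefs
values = continuation_values(reward=10.0, cost=2.0, p_service=beliefs)
first_abandon = next(t for t, val in enumerate(values) if val == 0.0)
```

With these illustrative numbers the caller is willing to wait two periods and then abandons, showing how the reward-to-cost ratio and the believed service hazard jointly determine patience.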
We undertake an empirical study of the impact of delay announcements on callers' abandonment behavior and the performance of a call center with two priority classes. A Cox regression analysis reveals that in this call center, callers' abandonment behavior is affected by the announcement messages heard. To account for this, we formulate a structural estimation model of callers' (endogenous) abandonment decisions. In this model, callers are forward-looking utility maximizers and make their abandonment decisions by solving an optimal stopping problem. Each caller receives a reward from service and incurs a linear cost of waiting. The reward and per-period waiting cost constitute the structural parameters that we estimate from the data of callers' abandonment decisions as well as the announcement messages heard. Call center performance is modeled by a Markovian approximation. The main methodological contribution is the definition of a steady-state equilibrium, in which callers' expectation of their waiting time, which shapes their (rational) abandonment behavior, matches their actual waiting time in the call center, and the characterization of this equilibrium as the solution to a set of nonlinear equations. A counterfactual analysis shows that callers react to longer delay announcements by abandoning earlier, that less patient callers, as characterized by their reward and cost parameters, react more strongly to delay announcements, and that congestion in the call center at the time of the call affects caller reactions to delay announcements.
Designing modern call centers requires an understanding of callers' patience and abandonment behavior. Using a Cox regression analysis, we show that callers' abandonment behavior may differ based on their contact history and may change across their different contacts. We control for caller heterogeneity using a two-step grouped fixed-effects method. This analysis shows that differences in callers' abandonment behavior are driven not only by their heterogeneity but also by differences in their beliefs about their delays, which are shaped by their contact history. As a result, callers' beliefs about the waiting time distribution may not match the actual distribution in the call center, and the equilibrium condition underlying the rational expectations equilibrium assumption may not hold. To understand callers' prior beliefs about the waiting time distribution, and to disentangle the impact of changes in their beliefs driven by their contact history from the impact of their intrinsic parameters, we use a structural estimation approach in a Bayesian learning framework. We estimate the parameters of this model from a call center data set with multiple priority classes. We show that in this call center, new callers who do not have any prior experience with the call center are optimistic about their delay in the system and underestimate its length, irrespective of their priority class. We also show that our Bayesian learning model not only fits the data set better than the rational expectations equilibrium model but also outperforms it in out-of-sample tests. Our Bayesian framework not only sheds light on callers' learning process and their beliefs about their delays but could also leverage callers' contact history to provide personalized patience levels for callers. This personalized information enables the implementation of patience-based scheduling policies studied in the queueing literature.
The online appendix is available at https://doi.org/10.1287/opre.2018.1738 .
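The Bayesian-learning idea above can be sketched as a caller who starts with an optimistic prior on her delay and updates it after each contact. Here waits are taken as exponential with a conjugate Gamma prior on the rate; this modeling choice and all numbers are illustrative, not necessarily the paper's specification.

```python
def expected_wait(a, b):
    """Caller's current belief about her mean wait under a Gamma(a, b)
    prior/posterior on the exponential waiting rate: E[1/rate] = b / (a - 1)."""
    return b / (a - 1)

def update(a, b, observed_wait):
    """Gamma-exponential conjugate update after one observed wait."""
    return a + 1.0, b + observed_wait

# An "optimistic" new caller: prior expected wait of 2 minutes, while her
# actual waits (hypothetical) average around 6 minutes.
a, b = 3.0, 4.0                    # prior mean wait = 4 / (3 - 1) = 2
for w in [5.0, 7.0, 6.0, 6.5]:     # waits observed over repeated contacts
    a, b = update(a, b, w)
# The posterior mean wait drifts upward toward the true average as
# contact history accumulates.
```

This captures, in miniature, how contact history moves a caller's belief away from an optimistic prior and toward the delays she actually experiences.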
Although call centers have recently invested in callback technology, the effects of this innovation on call center performance are not clearly understood. In this paper, we take a data-driven approach to quantifying the operational impact of offering callbacks under a variety of callback policies. To achieve this goal, we formulate a structural model of the caller decision-making process under a callback option and impute callers' underlying preferences from data. Our model estimates shed light on caller preferences under a callback option. We find that callers experience three to six times less discomfort per unit of time while waiting for callbacks than while waiting in queue, suggesting that offering callbacks can increase service quality by channeling callers to an alternative service channel where they experience less discomfort while waiting. However, after controlling for expected waiting times, callers generally prefer waiting in queue over accepting a callback and waiting offline. This suggests that managers of this call center may want to invest effort in educating their customers about the benefits of the callback option. Using the callers' imputed preferences, we conduct counterfactual analyses of how various callback policies affect the performance of this call center. We find that in this call center, offering to hold the callers' spot in line or to call back within a guaranteed time window reduces average online waiting time (the average time callers wait on the phone) by up to 71% and improves service quality by decreasing callers' average incurred waiting cost by up to 46%. Moreover, we find that offering callbacks as a demand postponement strategy during periods of temporary congestion reduces average online waiting time by up to 86%, increases service quality by up to 54%, and increases system throughput by up to 2.1%. This paper was accepted by Vishal Gaur, operations management.
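The callback trade-off described above can be sketched as a simple cost comparison: a lower per-minute discomfort while waiting offline, plus a fixed adoption friction capturing callers' baseline preference for staying in queue. The function name, the 4x discomfort ratio (within the three-to-six range reported), and all numbers are hypothetical illustrations, not the paper's estimates.

```python
def accepts_callback(online_wait, offline_wait,
                     c_online=1.0, c_offline=0.25, friction=2.0):
    """True if the expected callback cost undercuts the online queue cost.

    c_offline < c_online reflects lower per-minute discomfort offline;
    friction captures callers' residual preference for waiting in queue
    even when expected waits are equal.
    """
    online_cost = c_online * online_wait
    callback_cost = c_offline * offline_wait + friction
    return callback_cost < online_cost

# With equal, short expected waits the friction term keeps callers in the
# queue; a long enough online wait flips the decision toward the callback.
```

The friction term is what makes the model consistent with the finding that, waits held equal, callers still lean toward the queue, which is why educating customers about callbacks could shift adoption.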
To increase revenue or improve customer service, companies are increasingly personalizing their product or service offerings based on their customers' history of interactions. In this paper, we show how call centers can improve customer service by implementing personalized priority policies. Under such policies, managers use customer contact history to predict individual-level caller abandonment and redialing behavior and prioritize callers based on these predictions to improve operational performance. We provide a framework for using individual-level customer history data to capture the idiosyncratic preferences and beliefs that drive caller abandonment and redialing behavior, and we quantify the operational performance improvements of these policies by applying our framework to caller history data from a real-world call center. We achieve this by formulating a structural model that uses a Bayesian learning framework to capture how callers' past waiting times and abandonment/redialing decisions affect their current abandonment and redialing behavior, and we use our data to impute the callers' underlying primitives, such as their rewards for service, waiting costs, and redialing costs. These primitives allow us to simulate caller behavior under a variety of personalized priority policies and, hence, to collect the relevant operational performance measures. We find that, relative to the first-come, first-served policy, our proposed personalized priority policies have the potential to decrease average waiting times by up to 29% or to increase system throughput by reducing the percentage of service requests lost to abandonment by up to 6.3%. This paper was accepted by Vishal Gaur, operations management.
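A hypothetical sketch of a personalized priority rule: given each caller's imputed reward and waiting cost, predict a patience level and serve the caller most likely to abandon soon. The patience proxy, class fields, and caller data below are illustrative only, not the paper's policy or primitives.

```python
from dataclasses import dataclass

@dataclass
class Caller:
    caller_id: str
    reward: float        # imputed reward for service (hypothetical)
    waiting_cost: float  # imputed cost per unit of waiting (hypothetical)

    @property
    def predicted_patience(self):
        # Toy patience proxy: time until cumulative waiting cost
        # exhausts the service reward.
        return self.reward / self.waiting_cost

def next_to_serve(queue):
    """Personalized priority: pick the caller predicted to abandon soonest."""
    return min(queue, key=lambda c: c.predicted_patience)

queue = [
    Caller("A", reward=10.0, waiting_cost=1.0),   # patience 10
    Caller("B", reward=8.0,  waiting_cost=4.0),   # patience 2
    Caller("C", reward=12.0, waiting_cost=2.0),   # patience 6
]
```

Serving the least patient caller first is one of the patience-based disciplines the queueing literature studies; the imputed primitives are what make such a rule implementable at the individual-caller level.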