Semi-empirical models based on in situ geotechnical tests have been the standard of practice for predicting soil liquefaction since 1971. More recently, prediction models based on free, readily available data have been proposed. These “geospatial” models rely on satellite remote sensing to infer subsurface traits without in situ tests. Using 15,223 liquefaction case histories from 24 earthquakes, this study assesses the performance of 23 models based on geotechnical or geospatial data using standardized metrics. Uncertainty due to finite sampling of case histories is accounted for and used to establish statistical significance. Geotechnical predictions are significantly more efficient on a global scale, yet successive models proposed over the last 20 years show little or no demonstrable improvement. In addition, geospatial models perform equally well for large subsets of the data—a provocative finding given the relative time and cost requirements underlying these predictions. Through this performance comparison, lessons for improving each class of model are elucidated in detail.
Earthquakes occurring over the past decade in the Canterbury region of New Zealand have produced liquefaction case-history data of unprecedented quantity. This provides the profession with a unique opportunity to advance the prediction of liquefaction occurrence and consequences. Toward that end, this article presents a curated dataset containing ∼15,000 cone-penetration-test-based liquefaction case histories compiled from three earthquakes in Canterbury. The compiled, post-processed data are presented in a dense array structure, allowing researchers to easily access and analyze a wealth of information pertinent to free-field liquefaction response (i.e., triggering and surface manifestation). Research opportunities using these data include, but are not limited to, the training or testing of new and existing liquefaction-prediction models. The many methods used to obtain and process the case-history data are detailed herein, as is the structure of the compiled digital file. Finally, recommendations for analyzing the data are outlined, including nuances and limitations that users should carefully consider.