A team of earthquake geologists, seismologists, and engineering seismologists has collectively produced an update of the national probabilistic seismic hazard (PSH) model for New Zealand (National Seismic Hazard Model, or NSHM). The new NSHM supersedes the earlier NSHM published in 2002 and used as the hazard basis for the New Zealand Loadings Standard and numerous other end-user applications. The new NSHM incorporates a fault source model that has been updated with over 200 new onshore and offshore fault sources and utilizes new New Zealand-based and international scaling relationships for the parameterization of the faults. The distributed seismicity model has also been updated to include post-1997 seismicity data, a new seismicity regionalization, and improved methodology for calculation of the seismicity parameters. Probabilistic seismic hazard maps produced from the new NSHM show a similar pattern of hazard to the earlier model at the national scale, but there are some significant reductions and increases in hazard at the regional scale. The national-scale differences between the new and earlier NSHM appear less than those seen between much earlier national models, indicating that some degree of consistency has been achieved in the national-scale pattern of hazard estimates, at least for return periods of 475 years and greater.

Online Material: Table of fault source parameters for the 2010 national seismic hazard model.
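As background for the 475-year figure quoted above: in PSH practice, a return period is tied to a probability of exceedance over a design window under a Poisson (memoryless) occurrence assumption. The sketch below (the function name is mine, not from the paper) shows the standard conversion; the familiar design level of 10% probability of exceedance in 50 years works out to roughly 475 years.

```python
import math

def return_period(p_exceed: float, t_years: float) -> float:
    """Return period in years for a given probability of exceedance
    over a window of t_years, assuming Poisson occurrence."""
    return -t_years / math.log(1.0 - p_exceed)

# 10% probability of exceedance in 50 years -- the design level behind
# the commonly quoted figure:
print(return_period(0.10, 50.0))  # ~474.6, conventionally rounded to 475
```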
The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced both to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful, but here we focus on statistical methods using future earthquake data only. We envision two evaluations: a self-consistency test, and a comparison of every pair of models for relative consistency. Both tests are based on the likelihood ratio method, and both would be fully prospective (that is, the models are not adjusted to fit the test data). To be tested, each model must assign a probability or probability density to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified "bins" with location, magnitude, time, and in some cases focal mechanism limits.

Introduction

Predicting the behavior of a system is the desired proof of a model of that system. Seismology cannot predict earthquake occurrence; it should, however, seek the best possible models to forecast earthquake occurrence as precisely as possible. This paper describes the rules of an experiment for testing earthquake forecasts statistically. The primary purposes of the tests described below are to evaluate physical models for earthquakes, to assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and to provide quantitative measures by which the models might be assigned weights in a future consensus model or be judged suitable for particular areas. To test models against one another, we require that forecasts based on them can be expressed numerically in a standard format. That format is the average rate of earthquake occurrence within pre-specified limits of hypocentral latitude, longitude, magnitude, and time. For some source models there will also be focal mechanism limits.
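To make the likelihood ratio method referenced above concrete, here is a minimal sketch, assuming (as is standard for such binned forecasts) that the count in each bin is an independent Poisson variable whose mean is the forecast rate. The function names and the two-bin example are mine, for illustration only.

```python
import math

def log_likelihood(rates, counts):
    """Joint log-likelihood of observed bin counts given forecast rates,
    assuming independent Poisson counts per bin (rates must be positive)."""
    return sum(n * math.log(r) - r - math.lgamma(n + 1)
               for r, n in zip(rates, counts))

def log_likelihood_ratio(rates_a, rates_b, counts):
    """Log-likelihood ratio of model A over model B on the same catalog;
    positive values mean the data favor model A."""
    return log_likelihood(rates_a, counts) - log_likelihood(rates_b, counts)

# Hypothetical two-bin example: model A concentrates its rate in the bin
# where the earthquakes actually occurred, so the ratio comes out positive.
print(log_likelihood_ratio([2.5, 0.5], [1.5, 1.5], [3, 0]))  # ~1.53
```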
The five-year experiment of the Regional Earthquake Likelihood Models (RELM) working group was designed to compare several prospective forecasts of earthquake rates in latitude-longitude-magnitude bins in and around California. This forecast format is being used as a blueprint for many other earthquake predictability experiments around the world, and therefore it is important to consider how to evaluate the performance of such forecasts. Two tests that are currently used are based on the likelihood of the observed distribution of earthquakes given a forecast; one test compares the binned space-rate-magnitude observation and forecast, and the other compares only the rate forecast and the number of observed earthquakes. In this article, we discuss a subtle flaw in the current test of rate forecasts, and we propose two new tests that isolate the spatial and magnitude component, respectively, of a space-rate-magnitude forecast. For illustration, we consider the RELM forecasts and the distribution of earthquakes observed during the first half of the ongoing RELM experiment. We show that a space-rate-magnitude forecast may appear to be consistent with the distribution of observed earthquakes despite the spatial forecast being inconsistent with the spatial distribution of observed earthquakes, and we suggest that these new tests should be used to provide increased detail in earthquake forecast evaluation. We also discuss the statistical power of each of the likelihood-based tests and the stability (with respect to earthquake catalog uncertainties) of results from the likelihood-based tests.
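One plausible reading of the spatial test sketched in this abstract is a simulation-based quantile score: rescale the forecast so it sums to the observed number of events (removing the rate component), simulate catalogs from the rescaled forecast, and ask how often a simulated catalog is less likely than the observation. The sketch below is my illustration under that reading, not the paper's code; the normalization choice, function names, and three-bin example are assumptions, and rates are assumed strictly positive.

```python
import math
import random

def log_likelihood(rates, counts):
    """Joint log-likelihood of binned counts, assuming independent
    Poisson counts per bin (rates must be strictly positive)."""
    return sum(n * math.log(r) - r - math.lgamma(n + 1)
               for r, n in zip(rates, counts))

def s_test_quantile(rates, observed_counts, n_sim=1000, seed=0):
    """Spatial-test-style quantile score: fraction of catalogs simulated
    from the rescaled forecast whose log-likelihood falls at or below the
    observed one.  Very small scores flag spatial inconsistency."""
    rng = random.Random(seed)
    n_obs = sum(observed_counts)
    total = sum(rates)
    scaled = [r * n_obs / total for r in rates]  # keep only spatial shape
    probs = [r / total for r in rates]
    ll_obs = log_likelihood(scaled, observed_counts)
    bins = list(range(len(rates)))
    below = 0
    for _ in range(n_sim):
        counts = [0] * len(rates)
        for b in rng.choices(bins, weights=probs, k=n_obs):
            counts[b] += 1
        if log_likelihood(scaled, counts) <= ll_obs:
            below += 1
    return below / n_sim

# Hypothetical three-bin forecast and observation:
print(s_test_quantile([0.2, 1.0, 0.3], [0, 4, 0]))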