A new plate model is used to analyze the mean seismicities of the seven types of plate boundary (CRB, continental rift boundary; CTF, continental transform fault; CCB, continental convergent boundary; OSR, oceanic spreading ridge; OTF, oceanic transform fault; OCB, oceanic convergent boundary; SUB, subduction zone). We compare the plate-like (non-orogen) regions of model PB2002 (Bird, 2003) with the centroid moment tensor (CMT) catalog to select apparent boundary half-widths and then assign 95% of shallow earthquakes to one of these settings. A tapered Gutenberg-Richter model of the frequency/moment relation is fit to the subcatalog for each setting by maximum likelihood. Best-fitting β values range from 0.53 to 0.92, but all 95% confidence ranges are consistent with a common value of 0.61-0.66. To better determine some corner magnitudes, we expand the subcatalogs by (1) including orogens and (2) including the years 1900-1975 from the catalog of Pacheco and Sykes (1992). Combining both the earthquake statistics and the plate-tectonic constraint on moment rate, we estimate corner magnitudes for each class of plate boundary.
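For reference, the tapered Gutenberg-Richter relation fit in this analysis is conventionally written as a complementary cumulative distribution of seismic moment above a threshold; the following is the standard form of the model, sketched here rather than quoted from the paper:

```latex
% Tapered Gutenberg-Richter (tapered Pareto) survivor function:
% fraction of events with seismic moment exceeding M, for M >= M_t.
G(M) \;=\; \left(\frac{M_t}{M}\right)^{\beta}
           \exp\!\left(\frac{M_t - M}{M_c}\right), \qquad M \ge M_t
```

Here M_t is the completeness threshold moment, β sets the power-law slope for small and moderate events, and the corner moment M_c controls the exponential roll-off that defines the corner magnitude.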
We have initially developed a time-independent forecast for southern California by smoothing the locations of magnitude 2 and larger earthquakes. We show that using small m ≥ 2 earthquakes gives a reasonably good prediction of m ≥ 5 earthquakes. Our forecast outperforms other time-independent models (Kagan and Jackson, 1994; Frankel et al., 1997), mostly because it has higher spatial resolution. We have then developed a method to estimate daily earthquake probabilities in southern California using the Epidemic-Type Earthquake Sequence model (Kagan and Knopoff, 1987; Ogata, 1988; Kagan and Jackson, 2000). The forecasted seismicity rate is the sum of a constant background seismicity, proportional to our time-independent model, and of the aftershocks of all past earthquakes. Each earthquake triggers aftershocks at a rate that increases exponentially with its magnitude and decreases with time following Omori's law. We use an isotropic kernel to model the spatial distribution of aftershocks for small (m ≤ 5.5) mainshocks. For larger events, we smooth the density of early aftershocks to model the density of future aftershocks. The model also assumes that all earthquake magnitudes follow the Gutenberg-Richter law with a uniform b-value. We use a maximum likelihood method to estimate the model parameters and to test the short-term and time-independent forecasts. A retrospective test with daily updates of the forecasts between 1 January 1985 and 10 March 2004 shows that the short-term model increases the average probability of an earthquake occurrence by a factor of 11.5 compared with the time-independent forecast.
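As a concrete illustration of the rate calculation just described (a constant background plus Omori-decaying triggering whose productivity grows exponentially with mainshock magnitude), here is a minimal sketch; the function name, parameter names and numerical values are assumptions for the example, not the values estimated in the study:

```python
import numpy as np

def etes_rate(t, past_times, past_mags, mu=0.05,
              K=0.02, alpha=np.log(10), c=0.01, p=1.2, m_min=2.0):
    """Illustrative ETES-style seismicity rate at time t (events/day).

    mu      : constant background rate (proportional to a time-independent model)
    K, alpha: aftershock productivity, growing exponentially with mainshock magnitude
    c, p    : Omori-law parameters controlling the temporal decay of triggering
    All values are placeholders, not the parameters fitted in the paper.
    """
    past_times = np.asarray(past_times, dtype=float)
    past_mags = np.asarray(past_mags, dtype=float)
    earlier = past_times < t
    dt = t - past_times[earlier]
    productivity = K * np.exp(alpha * (past_mags[earlier] - m_min))
    triggered = np.sum(productivity * (dt + c) ** (-p))   # Omori decay
    return mu + triggered

# Example: rate one day after an m 6.0 event on a quiet background
print(etes_rate(t=1.0, past_times=[0.0], past_mags=[6.0]))
```

The spatial kernels and the magnitude distribution described in the abstract are omitted here to keep the sketch focused on the temporal behaviour of the triggered rate.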
We present long-term and short-term forecasts for magnitude 5.8 and larger earthquakes. We discuss a method for optimizing both procedures and testing their forecasting effectiveness using the likelihood function. Our forecasts are expressed as a rate density (that is, the probability per unit area and time) anywhere on the Earth. Our forecasts are for scientific testing only; they are not to be construed as earthquake predictions or warnings, and they carry no official endorsement. For our long-term forecast we assume that the rate density is proportional to a smoothed version of past seismicity (using the Harvard CMT catalogue). This is in some ways antithetical to the seismic gap model, which assumes that recent earthquakes deter future ones. The estimated rate density depends linearly on the magnitude of past earthquakes and approximately as a negative power of the epicentral distance, out to a few hundred kilometres. We assume no explicit time dependence, although the estimated rate density will vary slightly from day to day as earthquakes enter the catalogue. The forecast applies to the ensemble of earthquakes during the test period; it is not meant to predict any single earthquake, and no single earthquake, or the lack of one, is adequate to evaluate such a hypothesis. We assume that 1 per cent of all earthquakes are surprises, taken to be uniformly likely to occur in areas with no earthquakes since 1977. We have made specific forecasts for the calendar year 1999 for the Northwest Pacific and Southwest Pacific regions, and we plan to expand the forecast to the whole Earth. We test the forecast against the earthquake catalogue using a likelihood test and present the results. Our short-term forecast, updated daily, makes explicit use of statistical models describing earthquake clustering. Like the long-term forecast, the short-term version is expressed as a rate density in location, magnitude and time. However, the short-term forecasts will change significantly from day to day in response to recent earthquakes. The forecast applies to main shocks, aftershocks, aftershocks of aftershocks, and main shocks preceded by foreshocks; there is no need to label each event, and the method is completely automatic. According to the model, nearly 10 per cent of moderately sized earthquakes will be followed by larger ones within a few weeks.
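A minimal sketch of how such a smoothed-seismicity rate density could be evaluated at a point, assuming a linear magnitude weighting and an approximate inverse-power distance kernel truncated at a few hundred kilometres; the constants and the exact functional form of the kernel are illustrative assumptions, not the authors' calibrated choices:

```python
import numpy as np

def long_term_rate_density(r_km, mags, m_ref=5.8, r_max=300.0, r_s=10.0):
    """Illustrative smoothed-seismicity rate density at one evaluation point
    (arbitrary units).

    Each past earthquake contributes a weight that grows linearly with its
    magnitude above the reference m_ref, spread over epicentral distance with
    an approximately inverse-power kernel out to r_max km.  All constants are
    assumptions for this sketch.
    """
    r_km = np.asarray(r_km, dtype=float)
    mags = np.asarray(mags, dtype=float)
    weight = 1.0 + (mags - m_ref)           # linear magnitude dependence
    kernel = 1.0 / (r_km**2 + r_s**2)       # ~ r^-2 at large distance
    kernel[r_km > r_max] = 0.0              # no contribution beyond a few hundred km
    return np.sum(weight * kernel)

# Example: two past events, 20 km and 150 km from the evaluation point
print(long_term_rate_density([20.0, 150.0], [6.1, 7.0]))
```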
Can the time, location, and magnitude of future earthquakes be predicted reliably and accurately? In their Perspective, Geller et al. answer “no.” Citing recent results from the physics of nonlinear systems (“chaos theory”), they argue that any small earthquake has some chance of cascading into a large event. According to research cited by the authors, whether or not this happens depends on unmeasurably fine details of conditions in Earth's interior. Earthquakes are therefore inherently unpredictable. Geller et al. suggest that controversy over prediction lingers because prediction claims are not stated as objectively testable scientific hypotheses and because of overly optimistic reports in the mass media.
An accumulation of seismic moment data gathered over the previous decade justifies a new attempt at a comprehensive statistical analysis of these data: more rigorous statistical techniques are introduced, their properties investigated, and the methods applied to large modern data sets. Several theoretical distributions of earthquake size (seismic moment-frequency relations) are described and compared. We discuss the requirements for such distributions and introduce an upper bound, or ‘corner moment’, needed for a distribution to have a finite energy or moment flux. We derive expressions for the probability density functions and statistical moments of the distributions. We also describe parameter evaluation, in particular how to estimate the seismic moment distribution for the largest earthquakes. Simulating earthquake size distributions allows a more rigorous evaluation of the distribution parameters and points to the limitations of classical statistical analysis of earthquake data. Simulations suggest that several earthquakes approaching or exceeding the corner magnitude (mc) need to be registered to evaluate mc with reasonable accuracy. Using the Harvard catalogue data, we compare moment distribution parameters for various temporal spans of the catalogue, for different tectonic provinces and depth ranges, and for earthquakes with various focal mechanisms. The statistical analysis suggests that the exponent β is universal (β=0.60–0.65) for all moderate earthquakes. The corner moment (Mc) value determined by the maximum-likelihood method, both in subduction zones and globally, is about 10^21 N m, corresponding to a corner moment magnitude mc≈8.0. For mid-oceanic earthquakes mc is apparently smaller: for spreading ridges it is about 5.8, and for strike-slip earthquakes on transform faults it decreases from 7.2 to 6.5 as the relative slip velocity of the faults increases. We investigate the seismic moment errors, both random and systematic, and their dependence on earthquake size; the relative errors appear to decrease for larger events. The influence of moment uncertainties on the parameter estimates is studied: whereas the β values do not appear to be significantly affected by the errors, large errors can lead to substantially biased estimates of the corner moment. We compare the Harvard catalogue results with earthquake data from instrumental catalogues covering the first three-quarters of the 20th century. Several very large earthquakes (m≥9) occurred around the middle of the century; their magnitudes cannot be fitted by a modified Gutenberg–Richter law with mc=8.0–8.5. Among other factors, this discrepancy can be explained either by substantially higher errors in the earlier magnitude values or by mc being higher for some subduction zones. It is unlikely that data available now or in the near future will be sufficient to determine a corner magnitude of 9 or above with reasonable precision using purely statistical methods.
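The corner-moment estimation described above can be illustrated with a short maximum-likelihood fit of the tapered Gutenberg-Richter (tapered Pareto) distribution to synthetic moments; the sampling construction, starting values and threshold moment below are assumptions made for this sketch and are not taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, moments, m_t):
    """Negative log-likelihood of the tapered Gutenberg-Richter distribution
    with lower threshold m_t, exponent beta and corner moment m_c."""
    beta, log_mc = params
    m_c = np.exp(log_mc)
    if beta <= 0:
        return np.inf
    # density: f(M) = (beta/M + 1/m_c) * (m_t/M)**beta * exp((m_t - M)/m_c)
    ll = (np.log(beta / moments + 1.0 / m_c)
          - beta * np.log(moments / m_t)
          + (m_t - moments) / m_c)
    return -np.sum(ll)

def sample_tapered_gr(n, beta, m_c, m_t, rng):
    """Draw n moments as the minimum of a Pareto variate and an exponential
    tail, which reproduces the tapered Gutenberg-Richter survivor function."""
    u, v = rng.random(n), rng.random(n)
    pareto = m_t * u ** (-1.0 / beta)
    exp_tail = m_t - m_c * np.log(v)
    return np.minimum(pareto, exp_tail)

rng = np.random.default_rng(0)
m_t = 1e17                       # assumed threshold moment (N m)
data = sample_tapered_gr(5000, beta=0.63, m_c=1e21, m_t=m_t, rng=rng)

res = minimize(neg_log_likelihood, x0=[0.8, np.log(1e20)],
               args=(data, m_t), method="Nelder-Mead")
beta_hat, mc_hat = res.x[0], np.exp(res.x[1])
print(f"beta ~ {beta_hat:.2f}, corner moment ~ {mc_hat:.2e} N m")
```

Rerunning the fit on samples with few events near the corner moment illustrates the point made in the abstract: beta is recovered robustly, while the corner moment estimate can be strongly biased.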