Forecasting aftershock probabilities is carried out by the authorities to mitigate hazards in the disaster area after a main shock. However, although most large aftershocks occur within a day of the main shock, operational forecasting has been very difficult during this period because the recording of early aftershocks is incomplete. Here we propose a real-time method for efficiently forecasting the occurrence rates of potential aftershocks from the systematically incomplete observations that become available within a few hours of the main shock. We demonstrate the method's utility by retrospectively forecasting the early aftershock activity of the 2011 M9.0 Tohoku-Oki Earthquake in Japan. Furthermore, for the aftershocks of a recent inland earthquake in Japan, we compare the results obtained from real-time data with those from the compiled preliminary data to examine the robustness of the method.
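The abstract does not spell out the forecasting model, so the following is only a minimal sketch of the general idea behind forecasting from incomplete early catalogs: the observed event rate is treated as a complete aftershock rate, here the standard Omori-Utsu law, thinned by a time-dependent detection probability. The Omori-Utsu parameter values and the detection function below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def omori_utsu_rate(t, K=100.0, c=0.01, p=1.1):
    # Standard Omori-Utsu aftershock rate (events per day); an illustrative
    # stand-in for whatever complete-activity model the paper actually uses.
    return K / (t + c) ** p

def detection_probability(t, t_half=0.1):
    # Hypothetical detection probability that rises from near zero just after
    # the main shock toward one as the seismic network catches up.
    return t / (t + t_half)

def apparent_rate(t):
    # Rate of aftershocks that would actually appear in the early catalog.
    return detection_probability(t) * omori_utsu_rate(t)

# Example: complete vs. detected rate 0.05 days (~1.2 h) after the main shock
t = 0.05
print(omori_utsu_rate(t), apparent_rate(t))
```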
Forecasting aftershock probabilities as early as possible after a main shock is required to mitigate seismic risks in the disaster area. In general, aftershock activity can be complex, including secondary aftershocks and even the triggering of larger earthquakes. However, implementing such early forecasting has been difficult because numerous aftershocks go unobserved immediately after the main shock owing to the dense overlapping of seismic waves. Here we propose a method for estimating the parameters of the epidemic-type aftershock sequence (ETAS) model from incompletely observed aftershocks shortly after the main shock by modeling an empirical feature of the data deficiency. The ETAS model estimated in this way can effectively forecast subsequent aftershock occurrences; for example, the model estimated from the first 24 h of data after the main shock forecasts well the secondary aftershocks that follow strong aftershocks. This method can be useful for an early and unbiased assessment of the aftershock hazard.
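For reference, here is a minimal sketch of the conditional intensity of a standard ETAS model (Ogata, 1988), on which the abstract builds. The parameter names (mu, K, c, p, alpha) follow the usual convention, the numerical values are illustrative only, and the paper's modeling of the data deficiency is not reproduced here.

```python
import numpy as np

def etas_intensity(t, event_times, event_mags, mu, K, c, p, alpha, m_ref):
    """Conditional intensity of a standard ETAS model:
    lambda(t) = mu + sum_{t_i < t} K * exp(alpha * (M_i - m_ref)) / (t - t_i + c)**p.
    Parameter names follow the usual convention; the values used below are
    illustrative, not estimates from the paper."""
    past = event_times < t
    dt = t - event_times[past]
    triggered = K * np.exp(alpha * (event_mags[past] - m_ref)) / (dt + c) ** p
    return mu + triggered.sum()

# Example: intensity 0.5 days after a single M7.0 event at t = 0 (made-up values)
times = np.array([0.0])
mags = np.array([7.0])
print(etas_intensity(0.5, times, mags,
                     mu=0.1, K=0.05, c=0.01, p=1.1, alpha=1.5, m_ref=5.0))
```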
Because aftershock occurrences can pose significant seismic risks for a considerable time after the main shock, prospective forecasting of intermediate-term aftershock activity as soon as possible is important. The epidemic-type aftershock sequence (ETAS) model, fitted by maximum likelihood, effectively reproduces general aftershock activity, including secondary and higher-order aftershocks, and can be employed for such forecasting. However, because accurate parameter estimation cannot always be expected from incomplete early aftershock data in which many events are missing, forecasting that uses only a single estimated parameter set (plug-in forecasting) can frequently perform poorly. We therefore propose Bayesian forecasting, which combines the forecasts of the ETAS model over the various parameter sets that are probable given the data. By conducting forecasting tests of one-month aftershock sequences based on the first day of data after the main shock, as an example of early intermediate-term forecasting, we show that Bayesian forecasting performs better than plug-in forecasting on average in terms of the log-likelihood score. Furthermore, to improve the forecasting of large aftershocks, we apply a nonparametric (NP) model to the magnitude data from the learning period and compare its forecasting performance with that of the Gutenberg-Richter (G-R) formula. The NP forecast performs better than the G-R formula in some cases but worse in others; robust forecasting can therefore be obtained by an ensemble forecast that combines the two complementary forecasts. The proposed method is useful for a stable, unbiased intermediate-term assessment of aftershock probabilities.
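A minimal sketch of the contrast drawn in this abstract: the plug-in forecast evaluates the model at the single maximum-likelihood parameter set, whereas the Bayesian forecast averages the model's forecasts over many parameter sets weighted by their posterior probability given the learning data. The function names, the dummy rate model, and the mixture weight in the ensemble example are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def plugin_forecast(t, rate_model, theta_mle):
    # Plug-in forecast: the rate model evaluated at the single MLE parameter set.
    return rate_model(t, theta_mle)

def bayesian_forecast(t, rate_model, theta_samples, posterior_weights=None):
    # Bayesian forecast: posterior-weighted average of the forecasts produced
    # by many probable parameter sets given the learning data.
    rates = np.array([rate_model(t, theta) for theta in theta_samples])
    return np.average(rates, weights=posterior_weights)

def ensemble_magnitude_pdf(m, pdf_gr, pdf_np, w=0.5):
    # Ensemble magnitude forecast: a weighted mixture of the Gutenberg-Richter
    # and nonparametric densities; the weight w = 0.5 is an illustrative choice.
    return w * pdf_gr(m) + (1.0 - w) * pdf_np(m)

# Toy usage with a dummy rate model r(t; theta) = theta / (t + 1)
dummy_model = lambda t, theta: theta / (t + 1.0)
samples = np.array([0.8, 1.0, 1.2])  # stand-in posterior draws
print(plugin_forecast(2.0, dummy_model, 1.0),
      bayesian_forecast(2.0, dummy_model, samples))
```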
The time histogram is a fundamental tool for representing the inhomogeneous density of event occurrences such as neuronal firings. The shape of a histogram depends critically on the size of the bins that partition the time axis. In most neurophysiological studies, however, researchers have selected the bin size arbitrarily when analyzing fluctuations in neuronal activity. A rigorous method for selecting the appropriate bin size was recently derived that minimizes the mean integrated squared error between the time histogram and the unknown underlying rate (Shimazaki & Shinomoto, 2007). This derivation assumes that spikes are drawn independently from a given rate. In practice, however, biological neurons express non-Poissonian features in their firing patterns, such that a spike's occurrence depends on the preceding spikes, which inevitably degrades the optimization. In this letter, we revise the bin-size selection method to account for such non-Poissonian features. The improvement in the goodness of fit of the time histogram is assessed and confirmed with numerically simulated non-Poissonian spike trains generated from a given fluctuating rate. For some experimental data, the revised algorithm yields a histogram whose shape differs from that obtained with the Poissonian optimization method.
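For context, here is a minimal sketch of the Poisson-based bin-size selection rule of Shimazaki & Shinomoto (2007) that the letter revises: among candidate bin widths, it chooses the width minimizing the cost C(Delta) = (2*kbar - v) / Delta**2, where kbar and v are the mean and biased variance of the spike counts per bin. The non-Poissonian correction proposed in the letter is not reproduced here, and the candidate widths and simulated data are illustrative.

```python
import numpy as np

def optimal_bin_size(spike_times, t_start, t_end, candidate_widths):
    # Shimazaki-Shinomoto (2007) rule under the Poisson assumption:
    # minimize C(Delta) = (2*kbar - v) / Delta**2 over the candidate widths,
    # where kbar and v are the mean and biased variance of the bin counts.
    costs = []
    for delta in candidate_widths:
        n_bins = max(1, int(np.ceil((t_end - t_start) / delta)))
        counts, _ = np.histogram(spike_times, bins=n_bins, range=(t_start, t_end))
        kbar = counts.mean()
        v = counts.var()  # biased variance (divides by the number of bins)
        costs.append((2.0 * kbar - v) / delta ** 2)
    return candidate_widths[int(np.argmin(costs))]

# Example: choose among candidate widths for a simulated spike train
rng = np.random.default_rng(0)
spikes = np.sort(rng.uniform(0.0, 10.0, size=500))
widths = np.linspace(0.05, 2.0, 40)
print(optimal_bin_size(spikes, 0.0, 10.0, widths))
```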