Cover Page

Statistical Methods for Earthquakes Set

coordinated by
Nikolaos Limnios, Eleftheria Papadimitriou and Georgios Tsaklidis

Volume 1

Earthquake Occurrence

Short- and Long-term Models and their Validation

Rodolfo Console

Maura Murru

Giuseppe Falcone

Foreword

This volume presents models of earthquake occurrence that are of central importance in earthquake forecasting. The authors are among the first to have systematically studied and applied these models for assessing seismic hazard in diverse seismically active regions. The book offers an integrated view of earthquake catalogs, together with a detailed description and analysis of short- and long-term earthquake occurrence models and their validation.

One of the most important advances in earthquake forecasting is the development of statistical models. A deterministic prediction of an anticipated strong earthquake remains perhaps the most difficult problem to solve, since earthquake occurrence is characterized by clustering in space and time at different scales, reflecting the complexity of the geodynamic processes.

Earthquake forecasting necessitates the development of stochastic models of earthquake occurrence, which observations have revealed to be far from regularly recurrent. Forecasting constitutes a critically important problem whose solution requires fundamental research; at the same time, it is a major tool for testing hypotheses and theories.

The book aims to introduce the reader to the current understanding of algorithms applicable for modelling seismicity. It presents the statistical analysis of seismicity as a point process in space, time and magnitude, without presupposing that the reader possesses profound research experience in these topics, but at the same time without lessening the rigor of the reasoning or the variety of the chosen material.

It summarizes the state of the art in the application of these models, focuses explicitly on the related verification procedures and appropriate tests, and provides computer tools, with examples of their use, for the reader's ease in applying the described models. The authors are scientists who have participated in and contributed to the development of this research field.

Although there are books on earthquake occurrence models worth reading by researchers and students, a gap remains in summarizing the most relevant statistical approaches, from completely random earthquake occurrence up to time-dependent renewal models.

As can be seen from the list of contents, the statistical approaches and research results are presented in a logical and meaningful order, starting from the properties of an earthquake catalog and proceeding up to seismic hazard assessment.

The book summarizes streams of seismological research that have led to the present level of understanding of seismicity and its underlying processes.

It will be useful to scientists and researchers, students and lecturers dealing with statistical seismology. It can also serve as a self-teaching text for those who have little prior knowledge of the topics.

The stochastic approach brings optimism concerning the efficiency of earthquake forecasting. It should encourage investigators dealing with these topics to strengthen their efforts, in the conviction that by doing so they can contribute substantially to its accomplishment.

Eleftheria Papadimitriou
Professor of Seismology
Geophysics Department
Aristotle University of Thessaloniki

Preface
Short- and Long-term Models of Earthquake Occurrence and their Validation

This book includes a review of the algorithms applicable to modeling seismicity, such as earthquake clustering, as in the short-term ETAS (epidemic-type aftershock sequence) models, and the pseudo-periodic behavior of major earthquakes. Examples of the application of such algorithms to real seismicity are also given.

In short-term models, earthquakes are regarded as the realization of a point process modeled by a generalized Poisson distribution; no hypothesis is made about the physical mechanism of such a process. The occurrence of a seismic event within an infinitesimal volume of the space of its characterizing parameters is assumed to be completely random, while the behavior in a reasonably large volume of the same space may exhibit average non-random features.
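
In standard point-process notation (a sketch added here for concreteness; the symbols are illustrative and not fixed by the text), this assumption can be written as

\[
\Pr\{ N(\mathrm{d}t \, \mathrm{d}x \, \mathrm{d}y \, \mathrm{d}m) = 1 \mid \mathcal{H}_t \} = \lambda(t, x, y, m \mid \mathcal{H}_t) \, \mathrm{d}t \, \mathrm{d}x \, \mathrm{d}y \, \mathrm{d}m ,
\]

where \(\lambda\) is the conditional rate density and \(\mathcal{H}_t\) denotes the history of events that occurred before time \(t\).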

In the application of the statistical algorithms to seismicity, the Gutenberg–Richter law, with a constant b value, is assumed to describe the magnitude distribution of all the earthquakes in a sample. The occurrence rate density of earthquakes in space and time is modeled as the sum of two terms, one representing the independent, or spontaneous, activity and the other representing the activity triggered by previous earthquakes. The first term depends only on space and is modeled by a continuous function of the geometrical coordinates, obtained by smoothing the discrete distribution of the past instrumental seismicity. The second term depends on both space and time, and it is factorized into two terms depending, respectively, on the spatial distance from each past earthquake (according to an assigned spatial kernel) and on the time elapsed since it (according to the generalized Omori law).
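
As an illustration of this description, one common way of writing such a rate density (the symbols here are chosen for exposition and may differ in detail from the book's own formulation) is

\[
\lambda(x, y, t, m) = \left[ \mu(x, y) + \sum_{i:\, t_i < t} K \, e^{\alpha (m_i - m_0)} \left( 1 + \frac{t - t_i}{c} \right)^{-p} f(x - x_i, \, y - y_i) \right] \beta \, e^{-\beta (m - m_0)} ,
\]

where \(\mu(x, y)\) is the smoothed spontaneous rate, \(K\) and \(\alpha\) control the productivity of each past event of magnitude \(m_i\), the factor \((1 + (t - t_i)/c)^{-p}\) is the generalized Omori law, \(f\) is the assigned spatial kernel, and \(\beta = b \ln 10\) expresses the Gutenberg–Richter law above the completeness magnitude \(m_0\).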

The description of seismicity as a point process in space, time and magnitude is suitable for the application of statistical tools for comparing the performance of different models. Here, the Bayes theorem is introduced, aiming at its application in computing the probability that a given model is true in light of the results of experimental observations. The use of the Bayes theorem requires the computation of the likelihood ratio for the two models to be compared. It follows that both hypotheses must be fully and quantitatively described.
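
In its simplest form, for two competing hypotheses \(H_1\) and \(H_2\) and a set of observations \(D\) (a standard statement, recalled here for convenience), the theorem gives the posterior odds as

\[
\frac{P(H_1 \mid D)}{P(H_2 \mid D)} = \frac{P(D \mid H_1)}{P(D \mid H_2)} \cdot \frac{P(H_1)}{P(H_2)} ,
\]

where the first factor on the right-hand side is the likelihood ratio mentioned above and the second is the prior odds of the two hypotheses.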

Dealing with long-term recurrence, the renewal time-dependent models, which imply a pseudo-periodicity of earthquake occurrence, are compared with the simple time-independent Poisson model, in which every event occurs regardless of what has occurred in the past. Also in this case, the comparison is carried out through the likelihood ratio of a set of observations under the different models.
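
As a minimal numerical sketch of such a comparison (the interevent times, the choice of a lognormal renewal distribution and the library calls are illustrative assumptions, not the book's own data or code):

import numpy as np
from scipy import stats

# Hypothetical interevent times (in years) between strong earthquakes on a
# single source -- illustrative numbers only, not real data.
dt = np.array([110.0, 95.0, 130.0, 102.0, 118.0, 90.0])

# Time-independent Poisson model: interevent times are exponentially
# distributed, and each event occurs regardless of the past. The maximum
# likelihood estimate of the mean interevent time is the sample mean.
log_l_poisson = stats.expon(scale=dt.mean()).logpdf(dt).sum()

# Renewal (time-dependent) model: here a lognormal distribution fitted by
# maximum likelihood, so that the hazard rate depends on the elapsed time.
shape, loc, scale = stats.lognorm.fit(dt, floc=0.0)
log_l_renewal = stats.lognorm(shape, loc=loc, scale=scale).logpdf(dt).sum()

# Log-likelihood ratio of the observations under the two models:
# positive values favor the renewal model over the Poisson one.
print("log-likelihood ratio:", log_l_renewal - log_l_poisson)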

Chapter 1 deals with the application of the statistical tools to the seismicity of a region described by a seismic catalogue, starting with the extension to the continuum of the relevant concepts, with seismicity modeled as a continuous rate density function. The model introduced in this book (commonly known as the epidemic model) contains both a time-independent term, a function of the space coordinates, and a time-dependent term, conditioned by the previous events so as to model the mutual interaction among earthquakes (short-term earthquake clustering).

Chapter 2 is devoted to the statistical background, presented in an elementary way. It introduces the Bayes theorem, followed by the concepts of likelihood and likelihood ratio, which are the tools for estimating and comparing the reliability of different hypotheses.

Chapter 3 develops the concepts introduced in Chapter 2 in the specific case of epidemic models.

Chapter 4 introduces a variety of verification procedures that have recently become popular for testing the validity of forecasting models.

Chapter 5 reports examples of the application of epidemic models to real catalogues of recent seismic activity, namely the de-clustering of earthquake catalogues and earthquake forecasting.

Chapter 6 is devoted to the problem of long-term earthquake occurrence, addressing the question of the validity of renewal models with memory when they are applied to seismicity, which has important implications for earthquake hazard assessment.

Chapter 7 contains a short description of a set of computer programs, each of which performs one of the steps necessary for processing a seismic catalogue. In this respect, most of the programs are linked together by their input and output, making the whole set a sort of dedicated software package.

Rodolfo CONSOLE

Maura MURRU

Giuseppe FALCONE
May 2017