Seismology has entered the 1980s with increasing challenges and opportunities presented by advances in the technology for gathering and analyzing digital data. These developments have resulted in: 1) a rapid increase in the amount of data routinely recorded; 2) a shift in data collection from primarily analog toward digital recording; and 3) the increasing application of advanced computer technology in seismological studies. There is clearly the potential to significantly increase the scientific returns from seismic data of all types, provided the data problems occasioned by these technological advances are overcome.
During the past four years there have been rapid advances in digital data storage and processing capability that now make relatively sophisticated signal analysis simple to carry out even on microcomputers; the minicomputers and mainframes that seismologists have used for many years are increasingly powerful and versatile. As a result of these developments, high‐quality digital data sets are accessible in formats that are relatively easy to use, although standardized formats for data exchange are not yet universally agreed upon. Unlike the situation only a few years ago, it is now feasible to analyze a large suite of seismic events in a typical study, rather than base conclusions on only a few events that can be digitized from analog paper or analog tape recordings. Moreover, the increased dynamic range afforded by modern digital systems (>120 dB) allows one to study a large range of event magnitudes using the same source‐receiver combination.
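To put the quoted dynamic range in concrete terms, the standard decibel conversion for amplitudes (ratio = 10^(dB/20)) shows what 120 dB implies; the sketch below applies only that textbook formula, and the function name is illustrative rather than taken from any seismological software.

```python
import math


def db_to_amplitude_ratio(db: float) -> float:
    """Convert a dynamic range in decibels to an amplitude ratio.

    Uses the standard amplitude convention: ratio = 10 ** (dB / 20).
    """
    return 10.0 ** (db / 20.0)


# 120 dB of dynamic range corresponds to a factor of one million
# (six orders of magnitude) between the smallest and largest
# resolvable signal amplitudes on the same recording system.
ratio = db_to_amplitude_ratio(120.0)
print(f"120 dB -> amplitude ratio of {ratio:,.0f}")
```

This six-orders-of-magnitude span is what allows both small and large events to be recorded on scale by the same source‐receiver combination, instead of clipping the large events or losing the small ones in quantization noise.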