Abstract. One of the most important objects of study in the earth sciences is the Earth's interior structure. There are many sources of data for Earth tomography models: first-arrival passive seismic data (from actual earthquakes), first-arrival active seismic data (from seismic experiments), gravity data, and surface waves. Currently, each of these datasets is processed separately, resulting in several different Earth models that have specific coverage areas, different spatial resolutions, and varying degrees of accuracy. These models often provide complementary geophysical information on Earth structure (P and S wave velocity structure). Combining the information derived from each requires a joint inversion approach. Designing such joint inversion techniques presents an important theoretical and practical challenge. While such joint inversion methods are being developed, as a first step, we propose a practical solution: to fuse the Earth models coming from different datasets. Since these Earth models have different areas of coverage, model fusion is especially important because some of the resulting models provide better accuracy and/or spatial resolution in some spatial areas and at some depths, while other models provide better accuracy and/or spatial resolution in other areas or depths. The models used in this paper contain measurements that have not only different accuracy and coverage, but also different spatial resolution. We describe how to fuse such models under interval and probabilistic uncertainty. The resulting techniques can be used in other situations when we need to merge models of different accuracy and spatial resolution.
Abstract. To properly process data, we need to know the accuracy of different data points, i.e., the accuracy of different measurement results and expert estimates. Often, this accuracy is not given. For such situations, we describe how this accuracy can be estimated based on the available data.
Formulation of the Problem

Need to gauge accuracy. To properly process data, it is important to know the accuracy of different data values, i.e., the accuracy of different measurement results and expert estimates; see, e.g., [3][4][5]. In many cases, this accuracy information is available, but in many other practical situations, we do not have this information. In such situations, it is necessary to extract this accuracy information from the data itself.

Extracting uncertainty from data: traditional approach. The usual way to gauge the uncertainty of a measuring instrument is to compare the result x̃ produced by this measuring instrument with the result x̃_s of measuring the same quantity x by a much more accurate ("standard") measuring instrument. Since the "standard" measuring instrument is much more accurate than the instrument that we are trying to calibrate, we can safely ignore the inaccuracy of its measurements and take x̃_s as a good approximation to the actual value x. In this case, the difference x̃ − x̃_s between the two measurement results can serve as a good approximation to the desired measurement error ∆x = x̃ − x.

The traditional approach cannot be applied to calibrating state-of-the-art measuring instruments. The above traditional approach works well for many measuring instruments. However, we cannot apply this approach to calibrating state-of-the-art instruments, because these instruments are the best we have: there are no other instruments that are much more accurate than these and that could therefore serve as standard measuring instruments for such a calibration. Such situations are ubiquitous; for example:
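As a brief illustration of the traditional approach described above (this sketch is not part of the original text, and the readings used are hypothetical), one can estimate an instrument's systematic and random error components from the differences x̃ − x̃_s between its results and those of a standard instrument:

```python
# Minimal sketch of the traditional calibration approach: compare an instrument's
# readings with those of a much more accurate "standard" instrument measuring the
# same quantities, and use the differences as estimates of the measurement errors.
import statistics

def estimate_accuracy(readings, standard_readings):
    """Estimate the bias and standard deviation of the measurement error.

    readings          -- results x~ of the instrument being calibrated
    standard_readings -- results x~_s of the standard instrument, taken as the
                         actual values x since its own inaccuracy is negligible
    """
    errors = [x - x_s for x, x_s in zip(readings, standard_readings)]
    bias = statistics.mean(errors)    # systematic error component
    sigma = statistics.stdev(errors)  # random error component
    return bias, sigma

# Hypothetical example: five quantities measured by both instruments.
readings = [10.3, 12.1, 9.8, 11.5, 10.9]
standard_readings = [10.0, 12.0, 10.0, 11.2, 11.0]
bias, sigma = estimate_accuracy(readings, standard_readings)
print(f"estimated bias = {bias:.3f}, estimated st. dev. = {sigma:.3f}")
```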