The sheer size of many water systems challenges the ability of in situ sensor networks to resolve the spatiotemporal variability of hydrologic processes. New sources of widely distributed and mobile measurements are, however, emerging to potentially fill these observational gaps. This paper poses the question: How can nontraditional measurements, such as those made by volunteer ship captains, be used to improve hydrometeorological estimates across large surface water systems? We answer this question through the analysis of one of the largest such data sets: an unprecedented collection of one million unique measurements made by ships on the North American Great Lakes from 2006 to 2014. We introduce a flexible probabilistic framework, which can be used to integrate ship measurements, or other sets of irregular point measurements, into contiguous data sets. The performance of this framework is validated through the development of a new ship‐based spatial data product of water temperature, air temperature, and wind speed across the Great Lakes. An analysis of the final data product suggests that the availability of measurements across the Great Lakes will continue to play a large role in the confidence with which these large surface water systems can be studied and modeled. We discuss how this general and flexible approach can be applied to similar data sets, and how it will be of use to those seeking to merge large collections of measurements with other sources of data, such as physical models or remotely sensed products.
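The abstract above does not specify the probabilistic framework in detail, but one standard way to integrate irregular point measurements (such as ship readings) into a contiguous field with uncertainty estimates is Gaussian process regression (kriging). The sketch below is illustrative only, not the paper's method; the squared-exponential kernel, the `length_scale` and `noise` values, and the toy "ship" readings are all assumptions for the example.

```python
import numpy as np

def gp_interpolate(xy_obs, z_obs, xy_query, length_scale=1.0, noise=0.1):
    """Interpolate scattered point measurements onto query locations using
    Gaussian process regression with a squared-exponential kernel.
    Returns the posterior mean and standard deviation at each query point,
    so sparsely sampled regions report higher uncertainty."""
    def kernel(a, b):
        # Pairwise squared distances between rows of a and rows of b
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length_scale**2)

    # Observation covariance, regularized by the measurement-noise variance
    K = kernel(xy_obs, xy_obs) + noise**2 * np.eye(len(xy_obs))
    Ks = kernel(xy_query, xy_obs)
    mean = Ks @ np.linalg.solve(K, z_obs)
    # Posterior variance: k(x*, x*) - k(x*, X) K^{-1} k(X, x*)
    v = np.linalg.solve(K, Ks.T)
    var = 1.0 - np.einsum("ij,ji->i", Ks, v)
    return mean, np.sqrt(np.clip(var, 0.0, None))

# Toy example: three hypothetical ship temperature readings (deg C),
# interpolated to a location with no measurement
obs_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
obs_t = np.array([10.0, 12.0, 11.0])
mean, sd = gp_interpolate(obs_xy, obs_t, np.array([[0.5, 0.5]]))
```

The posterior standard deviation is what makes such a framework useful for sparse, mobile data: it quantifies where the ship tracks leave the estimate poorly constrained.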
There has been explosive growth in the ability to model large water systems. While these models are effective at routing water across massive scales, they do not yet forecast the street‐level information desired by local decision makers. Simultaneously, the increasing affordability of sensors has made it possible for even small communities to measure the state of their watersheds. However, these real‐time measurements are often not attached to a predictive model, making them less useful for applications like flood warnings. In this paper, we ask the question: how can highly localized forecasts be generated by fusing site‐scale sensor measurements with outputs from large‐scale models? Rather than altering the larger physical model, our approach uses the outputs of the unmodified model as the inputs to a dynamical system. To evaluate the approach, a case study is carried out across the U.S. state of Iowa using publicly available measurements from over 180 water level sensors and outputs from the National Water Model. The approach performs well across a third of the studied sites, as quantified by a low normalized root mean squared error. A performance classification is carried out based on Principal Component Analysis and Random Forests. We discuss how these results will enable stakeholders with local measurements to quickly benefit from large‐scale models without needing to run or modify the models themselves. The results are also placed into a broader sensor‐placement context to provide guidance on how investments in local measurements can be made to maximize predictive benefits.
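The abstract does not specify the dynamical system used to fuse model outputs with sensor readings; a minimal sketch of the general idea is a scalar Kalman filter that takes the large-scale model's output as the forecast and corrects it with the local sensor. This is an illustrative stand-in, not the paper's method, and the process-noise (`q`) and measurement-noise (`r`) values are assumed for the example.

```python
def kalman_fuse(model_series, sensor_series, q=0.05, r=0.1):
    """Fuse a large-scale model's water-level output with a co-located
    sensor via a scalar Kalman filter: the model output serves as the
    forecast (prior) at each step, the sensor reading as the update."""
    p = 1.0          # initial error variance (assumed)
    fused = []
    for m, z in zip(model_series, sensor_series):
        # Forecast step: take the model output as the prior estimate,
        # inflating its variance by the process noise
        est_prior, p_prior = m, p + q
        # Update step: correct toward the local sensor reading
        k = p_prior / (p_prior + r)          # Kalman gain in (0, 1)
        est = est_prior + k * (z - est_prior)
        p = (1.0 - k) * p_prior
        fused.append(est)
    return fused

# Toy example: a model that is biased high relative to the local sensor
fused = kalman_fuse([1.0] * 5, [0.5] * 5)
```

Because the gain is strictly between 0 and 1, each fused value is a weighted compromise between the model forecast and the sensor reading, which is the essential behavior a localized post-processor needs.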
ABSTRACT: The multitude of available operational remote sensing satellites has led to the development of many image fusion techniques that provide images of high spatial, spectral, and temporal resolution. Comparing the different techniques is necessary to obtain an optimized image for the various applications of remote sensing. There are two approaches to assessing image quality: (1) qualitatively, by visual interpretation, and (2) quantitatively, using image quality indices. However, an objective comparison is difficult because a visual assessment is always subjective and quantitative assessments rely on differing criteria; depending on the criteria and indices chosen, the result varies. It is therefore necessary to standardize both processes (qualitative and quantitative assessment) in order to allow an objective evaluation of image fusion quality. Various studies have been conducted at the University of Osnabrueck (UOS) to establish a standardized process for objectively comparing fused image quality. First, established image fusion quality assessment protocols, i.e., Quality with No Reference (QNR) and Khan's protocol, were compared across various fusion experiments. Second, the process of visual quality assessment was structured and standardized with the aim of providing an evaluation protocol. This manuscript reports the results of the comparison and provides recommendations for future research.
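As a concrete instance of the quantitative indices the abstract refers to, one widely used fused-image quality measure is ERGAS (relative dimensionless global error in synthesis), which aggregates per-band RMSE normalized by band means. This sketch is illustrative of the index class, not of the QNR or Khan protocols compared in the study; the band layout and resolution ratio are assumptions for the example.

```python
import numpy as np

def ergas(fused, reference, ratio=4):
    """ERGAS quality index for a fused image against a reference.
    `fused` and `reference` have shape (bands, rows, cols);
    `ratio` is the pan-to-multispectral resolution ratio (assumed 4).
    Lower values indicate better fusion quality."""
    # Mean squared error per spectral band
    mse = ((fused - reference) ** 2).mean(axis=(1, 2))
    # Per-band means of the reference, used for normalization
    means = reference.mean(axis=(1, 2))
    return 100.0 / ratio * np.sqrt((mse / means**2).mean())

# Toy example: a reference image and a fused image with a small uniform bias
ref = np.ones((3, 8, 8))
score = ergas(ref + 0.1, ref)
```

An index like this gives a single reproducible number per experiment, which is exactly what a standardized quantitative protocol needs; the abstract's point is that different indices weight errors differently, so the index set itself must be standardized.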