In recent years, remote sensing (RS) research on crop growth monitoring has gradually shifted from large-scale, static retrieval of spectral information toward timely, collaborative analysis of multi-source data at the meso- or micro-scale; this shift places higher demands on the efficiency of RS data acquisition and analysis. Rapid and stable extraction and analysis of massive RS data has therefore become a pressing problem. This paper reports on a Raster Dataset Clean & Reconstitution Multi-Grid (RDCRMG) architecture for remote sensing monitoring of vegetation dryness, in which different types of raster datasets are partitioned, organized and systematically applied. First, raster images are subdivided into several independent blocks and distributed for storage across different data nodes, using the multi-grid as a consistent partition unit. Second, the "no metadata model" concept is adopted so that target raster data can be extracted quickly by computing the data storage path directly, without retrieving metadata records. Third, the grids that cover a query range can be easily identified, so the query task can be split into several sub-tasks, grouped by grid, and executed in parallel. Our RDCRMG-based change detection test of spectral reflectance information and our comparative test of data extraction efficiency show that the RDCRMG is reliable for vegetation dryness monitoring, with only slight reflectance distortion and consistent percentage histograms. Furthermore, RDCRMG-based data extraction in parallel offers high efficiency and excellent stability compared with RDCRMG-based serial extraction and traditional data extraction. Finally, an RDCRMG-based vegetation dryness monitoring platform (VDMP) has been constructed to apply RS data inversion to vegetation dryness monitoring. Through actual applications, the RDCRMG architecture has proven appropriate for timely, automatic RS monitoring of vegetation dryness, with better performance, greater reliability and higher extensibility. Our future work will focus on integrating more kinds of continuously updated RS data into the RDCRMG-based VDMP, along with more multi-source collaborative analysis models for agricultural monitoring.
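A note on the "no metadata model": a block's storage location is a pure function of its grid indices, so a query needs only arithmetic rather than a catalogue lookup, and the covering grid cells can be dispatched as parallel sub-tasks. The Python sketch below illustrates this idea under assumed conventions (a 0.5-degree regular grid, a hypothetical path layout, and illustrative function names); it is not the paper's actual implementation.

```python
import math
from concurrent.futures import ThreadPoolExecutor

# Assumed grid parameters and path layout -- illustrative only, not the RDCRMG implementation.
GRID_SIZE_DEG = 0.5
STORAGE_ROOT = "/rdcrmg"

def grid_index(lon, lat):
    """Map a coordinate to its (column, row) indices in the regular multi-grid."""
    col = int(math.floor((lon + 180.0) / GRID_SIZE_DEG))
    row = int(math.floor((lat + 90.0) / GRID_SIZE_DEG))
    return col, row

def block_path(sensor, date, col, row):
    """Compute the storage path of a grid block directly from its indices,
    so no metadata catalogue has to be queried ("no metadata model")."""
    return f"{STORAGE_ROOT}/{sensor}/{date}/c{col:04d}_r{row:04d}.tif"

def blocks_for_bbox(sensor, date, min_lon, min_lat, max_lon, max_lat):
    """Enumerate the grid blocks covering a query bounding box."""
    c0, r0 = grid_index(min_lon, min_lat)
    c1, r1 = grid_index(max_lon, max_lat)
    return [block_path(sensor, date, c, r)
            for c in range(c0, c1 + 1) for r in range(r0, r1 + 1)]

def extract(path):
    """Placeholder for reading one block from its data node (e.g. with rasterio)."""
    return path

# Split the query into per-block sub-tasks and run them in parallel.
paths = blocks_for_bbox("MOD09GA", "2014-06-01", 100.0, 37.5, 101.2, 38.4)
with ThreadPoolExecutor() as pool:
    blocks = list(pool.map(extract, paths))
```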
Soil moisture (SM) is an essential Earth surface and climate system variable. Insights into its persistence and the corresponding time scales can improve numerical modeling and climate system prediction. In this study, SM data from 17 observation stations within the Babao River Basin, Northwest China, collected between 1 June and 31 August 2014, were used to investigate persistence and the corresponding time scales via adaptive fractal analysis. By applying adaptive fractal analysis to net radiation and by estimating the complementary cumulative distribution function of precipitation intervals, the relation between meteorological factors and the persistence and time scales of SM is determined and discussed. Results show that the persistence and corresponding time scales of SM can be described using a three-phase conceptual diagram. (1) At short time scales (approximately 0–14 hr [4 cm], 0–15 hr [10 cm], or 0–19 hr [20 cm]), the persistence of SM at most observation stations shows strong long-range correlation or nonstationarity. This is essentially due to evaporation driven by net radiation processes and the effects of rain. (2) At moderate time scales (approximately 14–159 hr [4 cm], 15–161 hr [10 cm], or 19–143 hr [20 cm]), the persistence of SM mostly exhibits weak long-range correlation or antipersistence due to net radiation process uncertainty and the probability of precipitation. (3) At long time scales (approximately greater than 159 hr [4 cm], 161 hr [10 cm], and 143 hr [20 cm]), SM dynamics exhibit antipersistence because a high probability of precipitation reverses changes in SM persistence.
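The persistence regimes above are distinguished by how a fluctuation function scales with window size. As an illustration only, the sketch below estimates such a scaling exponent with plain detrended fluctuation analysis; the paper uses adaptive fractal analysis, which differs in how the trend is removed. Exponents near 0.5 indicate uncorrelated noise, larger values long-range correlation (values near or above 1 suggest nonstationarity), and smaller values antipersistence.

```python
import numpy as np

def fluctuation_exponent(x, scales):
    """Detrended-fluctuation-style estimate of the scaling exponent alpha in F(s) ~ s**alpha."""
    y = np.cumsum(x - np.mean(x))                          # integrated profile
    fluct = []
    for s in scales:
        f2 = []
        for i in range(len(y) // s):                       # non-overlapping windows of length s
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # simple linear detrending (AFA adapts this step)
            f2.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

# Synthetic hourly soil-moisture-like series (~90 days); a random walk yields an exponent well above 1.
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=24 * 90))
print(fluctuation_exponent(series, scales=[16, 32, 64, 128, 256]))
```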
Spatiotemporal fusion is considered a feasible and cost-effective way to address the trade-off between the spatial and temporal resolution of satellite sensors. Recently proposed learning-based spatiotemporal fusion methods can address the prediction of both phenological and land-cover change. In this paper, we propose a novel deep learning-based spatiotemporal data fusion method that uses a two-stream convolutional neural network. The method combines forward and backward prediction to generate a target fine image, forming a temporal change-based mapping and a spatial information-based mapping simultaneously, and thereby addresses the prediction of both phenological and land-cover change with better generalization ability and robustness. Comparative experiments on test datasets with phenological and land-cover changes verified the effectiveness of our method. Compared with existing learning-based spatiotemporal fusion methods, our method is more effective in predicting phenological change and directly reconstructs the prediction with complete spatial detail, without the need for auxiliary modulation.
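As a rough illustration of the forward/backward two-stream idea only (not the architecture proposed in the paper), the PyTorch skeleton below predicts the fine image at the target date from both the earlier and the later fine/coarse pair and merges the two predictions with a learned 1x1 convolution; layer widths, depths and names are assumptions.

```python
import torch
import torch.nn as nn

class Stream(nn.Module):
    """One convolutional stream mapping a stack of input bands to a fine-image residual."""
    def __init__(self, in_ch, out_ch, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class TwoStreamFusion(nn.Module):
    """Forward stream: (fine t1, coarse t1, coarse t2) -> fine t2.
    Backward stream: (fine t3, coarse t3, coarse t2) -> fine t2.
    A 1x1 convolution merges the two predictions."""
    def __init__(self, bands):
        super().__init__()
        self.forward_stream = Stream(3 * bands, bands)
        self.backward_stream = Stream(3 * bands, bands)
        self.merge = nn.Conv2d(2 * bands, bands, 1)

    def forward(self, fine_t1, coarse_t1, coarse_t2, coarse_t3, fine_t3):
        p_fwd = fine_t1 + self.forward_stream(torch.cat([fine_t1, coarse_t1, coarse_t2], dim=1))
        p_bwd = fine_t3 + self.backward_stream(torch.cat([fine_t3, coarse_t3, coarse_t2], dim=1))
        return self.merge(torch.cat([p_fwd, p_bwd], dim=1))

# All inputs share the fine grid (coarse images upsampled beforehand).
x = torch.randn(1, 6, 64, 64)
print(TwoStreamFusion(bands=6)(x, x, x, x, x).shape)
```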
Spatiotemporal fusion (STF) is considered a feasible and cost-effective way to deal with the trade-off between the spatial and temporal resolution of satellite sensors and to generate satellite images with both high spatial and high temporal resolution. This is achieved by fusing two types of satellite images: images with fine temporal but coarse spatial resolution, and images with fine spatial but coarse temporal resolution. Numerous STF methods have been proposed; however, accurately predicting both abrupt land-cover change and phenological change remains a challenge. Meanwhile, robustness to radiation differences between multi-source satellite images is crucial for the effective application of STF methods. To address these problems, this paper proposes a hybrid deep learning-based STF method (HDLSFM). The method formulates a hybrid framework for robust fusion of phenological and land-cover change information with minimal input requirements, in which a nonlinear deep learning-based relative radiometric normalization, a deep learning-based super-resolution, and a linear fusion are combined to handle radiation differences between different types of satellite images and to predict land-cover and phenological change. Four comparative experiments using three popular STF methods as benchmarks, i.e., the spatial and temporal adaptive reflectance fusion model (STARFM), flexible spatiotemporal data fusion (FSDAF), and Fit-FC, demonstrated the effectiveness of HDLSFM in predicting phenological and land-cover change. Meanwhile, HDLSFM is robust to radiation differences between different types of satellite images and to the time interval between the prediction and base dates, which ensures its effectiveness in generating fused time-series data.
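The sketch below only illustrates how the three HDLSFM stages chain together; the learned components are replaced by crude stand-ins (mean/std matching for the relative radiometric normalization, nearest-neighbour upsampling for the super-resolution), and all function names and the 16x scale ratio are assumptions rather than details from the paper.

```python
import numpy as np

def radiometric_normalize(coarse, fine_degraded):
    """Stand-in for the nonlinear deep learning-based relative radiometric normalization:
    match mean and standard deviation of the coarse image to the degraded fine image."""
    scale = fine_degraded.std() / (coarse.std() + 1e-6)
    return (coarse - coarse.mean()) * scale + fine_degraded.mean()

def super_resolve(coarse, factor):
    """Stand-in for the deep learning-based super-resolution: nearest-neighbour upsampling."""
    return np.kron(coarse, np.ones((factor, factor)))

def linear_fusion(fine_base, sr_base, sr_pred):
    """Linear fusion: add the super-resolved coarse-scale change to the fine base image."""
    return fine_base + (sr_pred - sr_base)

# Illustrative run with synthetic data and an assumed 16x resolution ratio.
factor = 16
fine_t1 = np.random.rand(160, 160)                                     # fine image at base date
degraded = fine_t1.reshape(10, factor, 10, factor).mean(axis=(1, 3))   # fine image aggregated to the coarse grid
coarse_t1 = degraded + 0.1                                             # coarse image with a radiometric offset
coarse_t2 = coarse_t1 + 0.05                                           # coarse image at prediction date
fine_t2 = linear_fusion(fine_t1,
                        super_resolve(radiometric_normalize(coarse_t1, degraded), factor),
                        super_resolve(radiometric_normalize(coarse_t2, degraded), factor))
print(fine_t2.shape)
```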