Masking of clouds, cloud shadow, water and snow/ice in optical satellite imagery is an important step in automated processing chains. We compare the performance of the masks produced by Fmask ("Function of mask", as implemented in FORCE), ATCOR ("Atmospheric Correction") and Sen2Cor ("Sentinel-2 Correction") on a set of 20 Sentinel-2 scenes distributed over the globe, covering a wide variety of environments and climates. All three methods separate clear pixels from potential cloud pixels using rules based on physical properties such as top-of-atmosphere (TOA) reflectance, but they employ different rules and class-specific thresholds. The methods can also yield different results because they define the dilation buffer sizes for the classes cloud, cloud shadow and snow differently. Classification results are compared to the assessment of an expert human interpreter, based on at least 50 randomly selected polygons per class for each image. The class assignment of the human interpreter is considered the reference, or "truth". The interpreter carefully assigned each class label based on visual assessment of the true-color and infrared false-color images, supported by the bottom-of-atmosphere (BOA) reflectance spectra. The most important part of the comparison concerns the difference area of the three classifications, i.e., the part of the classified images where the results of Fmask, ATCOR and Sen2Cor disagree. Results on the difference area reveal the strengths and weaknesses of a classification more clearly than results on the complete image. The overall accuracies of Fmask, ATCOR and Sen2Cor on the difference areas of the selected scenes are 45%, 56% and 62%, respectively. User's and producer's accuracies are strongly class- and scene-dependent, typically varying between 30% and 90%. The comparison of the difference area is complemented by an evaluation of the area where all three classifications agree.
The overall accuracy for that "same area" is 97%, resulting in overall accuracies for the complete classification of 89%, 91% and 92% for Fmask, ATCOR and Sen2Cor, respectively.
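The accuracy figures above follow the standard confusion-matrix definitions: overall accuracy is the fraction of correctly classified samples, producer's accuracy is per-reference-class recall, and user's accuracy is per-mapped-class precision. The following minimal sketch illustrates these definitions; the matrix values are made up for illustration and are not taken from the study.

```python
import numpy as np

# Hypothetical 4x4 confusion matrix (rows: reference "truth" from the
# human interpreter, columns: classifier output) for the classes
# clear, cloud, cloud shadow, snow/ice. Illustrative numbers only.
cm = np.array([
    [80,  5,  3,  2],
    [ 6, 70,  4,  0],
    [ 4,  8, 55,  1],
    [ 2,  0,  1, 60],
])

# Overall accuracy: correctly classified samples over all samples.
overall_accuracy = np.trace(cm) / cm.sum()

# Producer's accuracy (recall): diagonal over reference-class totals.
producers_accuracy = np.diag(cm) / cm.sum(axis=1)

# User's accuracy (precision): diagonal over mapped-class totals.
users_accuracy = np.diag(cm) / cm.sum(axis=0)
```

Computed separately on the difference area and the agreement area, these metrics give the per-area results the abstract reports.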
The atmospheric correction of satellite images based on radiative transfer calculations is a prerequisite for many remote sensing applications. The software package ATCOR, developed at the German Aerospace Center (DLR), is a versatile atmospheric correction software capable of processing data acquired by many different optical satellite sensors. Based on this well-established algorithm, a new Python-based atmospheric correction software, PACO (Python-based Atmospheric Correction), has been developed to generate L2A products of Sentinel-2, Landsat-8, and new space-based hyperspectral sensors such as DESIS (DLR Earth Sensing Imaging Spectrometer) and EnMAP (Environmental Mapping and Analysis Program). This paper outlines the underlying algorithms of PACO and presents validation results obtained by comparing L2A products generated from Sentinel-2 L1C images with in situ (AERONET and RadCalNet) data over the VNIR-SWIR spectral range.
The masking of cloud shadows in optical satellite imagery is an important step in automated processing chains. A new method for cloud shadow detection in multispectral satellite images, the TIP method, is presented and compared to current methods. The TIP method is based on the evaluation of thresholds, indices and projections. Most state-of-the-art methods rely solely on one of these evaluation steps or on a complex working mechanism; the new method instead incorporates all three basic evaluation steps into one algorithm for simple and accurate cloud shadow detection. Furthermore, the performance of the masking algorithms provided by the software packages ATCOR ("Atmospheric Correction") and PACO ("Python-based Atmospheric Correction") is compared with that of the newly implemented TIP method on a set of 20 Sentinel-2 scenes distributed over the globe, covering a wide variety of environments and climates. The algorithms in each masking package include a cloud shadow class, but they employ different rules and class-specific thresholds. Classification results are compared to the assessment of an expert human interpreter, whose class assignment is considered the reference, or "truth". The overall accuracies for the class cloud shadow of ATCOR and PACO (including TIP) on the difference areas of the selected scenes are 70.4% and 76.6%, respectively. The difference area encompasses the parts of the classified image where the classification maps disagree. User's and producer's accuracies for the class cloud shadow are strongly scene-dependent, typically varying between 45% and 95%. The experimental results show that the proposed TIP method, based on thresholds, indices and projections, achieves improved cloud shadow detection performance.
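The abstract does not give the TIP method's actual thresholds or indices. The sketch below only illustrates the general shape of the first two evaluation steps (a reflectance threshold test combined with a spectral index test) for flagging dark cloud-shadow candidates; band choices and threshold values are assumptions for illustration, and the projection step (geometric matching of candidates to clouds along the solar direction) is omitted.

```python
import numpy as np

def shadow_candidates(nir, swir, blue, t_nir=0.12, t_swir=0.10, t_ratio=1.0):
    """Flag dark pixels as cloud-shadow candidates.

    nir, swir, blue: 2-D reflectance arrays in [0, 1].
    Thresholds are illustrative placeholders, not the TIP
    method's actual values.
    """
    # Threshold step: shadows are dark in NIR and SWIR.
    dark = (nir < t_nir) & (swir < t_swir)
    # Index step: shadowed surfaces retain relatively more diffuse
    # (blue-rich) illumination, raising the blue/NIR ratio.
    ratio = blue / np.maximum(nir, 1e-6)
    return dark & (ratio > t_ratio)
```

In a full TIP-style algorithm, the surviving candidates would then be screened by projecting detected clouds along the solar illumination direction and keeping only candidates that fall inside the projected shadow zone.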
The authors wish to make the following corrections to this paper [...]
ABSTRACT: Cirrus is one of the most common artifacts in remotely sensed optical data. Unlike low-altitude (1-3 km) clouds, cirrus clouds (8-20 km) are semitransparent, so the extinction of the upward-reflected solar radiance caused by cirrus can be compensated. The widely employed, almost de facto method for cirrus compensation is based on the 1.38 µm spectral channel, which measures the upwelling radiance reflected by the cirrus cloud. Knowledge of the spatial distribution of cirrus allows the per-channel cirrus attenuation to be estimated and the spectral channels to be compensated. However, a wide range of existing and planned sensors lack a 1.38 µm spectral channel. Data from these sensors can be corrected with the recently developed haze/cirrus removal method: an additive model based on an estimated cirrus thickness map (CTM) is applied to compensate the cirrus-conditioned extinction. Numerical and statistical evaluation of the CTM-based cirrus removal on more than 80 Landsat-8 OLI and 30 Sentinel-2 scenes demonstrates close agreement with cirrus removal based on the 1.38 µm channel.
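The additive model described above amounts to subtracting, per spectral channel, a scaled version of the cirrus thickness map from the observed reflectance. The following is a minimal sketch of that idea; the per-channel gain and the clipping to non-negative reflectance are assumptions for illustration, not the paper's actual fitting procedure.

```python
import numpy as np

def remove_cirrus(band, ctm, gain):
    """Additive cirrus compensation for one spectral channel.

    band: 2-D TOA reflectance of the channel.
    ctm:  estimated cirrus thickness map, same shape as band.
    gain: per-channel scaling factor relating the CTM to the cirrus
          reflectance contribution in this channel (illustrative;
          in practice it would be derived per scene and channel).
    """
    # Subtract the estimated cirrus contribution; clip so the
    # corrected reflectance stays physically non-negative.
    return np.clip(band - gain * ctm, 0.0, None)
```

With a 1.38 µm channel, the cirrus contribution can be measured directly; the CTM-based variant replaces that measurement with an estimated thickness map, which is what makes the method applicable to sensors lacking the 1.38 µm band.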