“…We adopted the same implementation that was benchmarked in the CloudSEN12 paper [2], with the only difference that the paper used L1C imagery (which is often not useful in practical use cases). In detail, this means we trained the UNet with a MobileNetV2 encoder using the Segmentation Models PyTorch Python library. We used a batch size of 32, random horizontal and vertical flipping, random 90-degree rotations, random mirroring, unweighted cross-entropy loss, early stopping with a patience of 10 epochs, the AdamW optimizer, a learning rate of 1e-3, and a learning rate schedule reducing the learning rate by a factor of 10 if the validation loss did not decrease for 4 epochs.…”
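The following is a minimal sketch of how this training setup could be expressed with Segmentation Models PyTorch and plain PyTorch. It is not the authors' code: the input channel count, class count, `encoder_weights=None`, `max_epochs`, and the `augment` helper are illustrative assumptions that are not stated in the excerpt (only the architecture, batch size, augmentations, loss, optimizer, learning rate, scheduler, and early-stopping patience are).

```python
import torch
import segmentation_models_pytorch as smp

# Assumed dataset specifics (not stated in the excerpt): 13 Sentinel-2 bands
# as input and 4 output classes, as in CloudSEN12.
IN_CHANNELS = 13
NUM_CLASSES = 4

# UNet with a MobileNetV2 encoder from Segmentation Models PyTorch.
model = smp.Unet(
    encoder_name="mobilenet_v2",
    encoder_weights=None,   # assumption: trained from scratch
    in_channels=IN_CHANNELS,
    classes=NUM_CLASSES,
)

# Unweighted cross-entropy loss and AdamW with a learning rate of 1e-3.
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# Reduce the learning rate by a factor of 10 if the validation loss has not
# decreased for 4 epochs.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=4
)

def augment(image: torch.Tensor, mask: torch.Tensor):
    """Illustrative augmentation: random horizontal/vertical flips and random
    90-degree rotations, applied identically to image (C, H, W) and mask (H, W)."""
    if torch.rand(1) < 0.5:                      # horizontal flip
        image, mask = image.flip(-1), mask.flip(-1)
    if torch.rand(1) < 0.5:                      # vertical flip
        image, mask = image.flip(-2), mask.flip(-2)
    k = int(torch.randint(0, 4, (1,)))           # rotation by k * 90 degrees
    return torch.rot90(image, k, (-2, -1)), torch.rot90(mask, k, (-2, -1))

def train(train_loader, val_loader, max_epochs=200, patience=10, device="cuda"):
    """Training loop with early stopping (patience of 10 epochs on val loss)."""
    model.to(device)
    best_val, epochs_without_improvement = float("inf"), 0
    for epoch in range(max_epochs):
        model.train()
        for images, masks in train_loader:       # loader built with batch_size=32
            images, masks = images.to(device), masks.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), masks)
            loss.backward()
            optimizer.step()

        # Validation pass drives both the LR schedule and early stopping.
        model.eval()
        val_loss, n = 0.0, 0
        with torch.no_grad():
            for images, masks in val_loader:
                images, masks = images.to(device), masks.to(device)
                val_loss += criterion(model(images), masks).item() * images.size(0)
                n += images.size(0)
        val_loss /= n
        scheduler.step(val_loss)

        if val_loss < best_val:
            best_val, epochs_without_improvement = val_loss, 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break
```

In this sketch the flip-and-rotate augmentation would be applied per sample inside the dataset or collate function; the original pipeline may instead rely on a dedicated augmentation library.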