Given a time-varying scalar field, analyzing correlations between different spatial regions, i.e., the linear dependence of time series within these regions, provides insight into the structural properties of the data. In this context, regions are connected components of the spatial domain with highly correlated time series. The detection and analysis of such regions are often performed globally, which requires pairwise correlation computations that are quadratic in the number of spatial data samples. Operations based on all pairwise correlations are therefore computationally demanding, especially for ensembles that model the uncertainty in spatio-temporal phenomena with multiple simulation runs. We propose a two-step procedure: In a first step, we map the spatial samples to a 3D embedding based on a pairwise correlation matrix computed from the ensemble of time series. The 3D embedding allows for a one-to-one mapping to a 3D color space, so that the outcome can be visually investigated by rendering the colors for all samples in the spatial domain. In a second step, we generate a hierarchical image segmentation based on the color images. From then on, we can visually analyze the correlations of regions at all levels of the hierarchy in an interactive setting, which includes the uncertainty-aware analysis of the regions' time series correlations and the respective time lags.
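To make the first step concrete, the sketch below computes a pairwise correlation matrix from per-sample time series, derives a 3D embedding from it, and rescales the embedding to RGB colors. This is a minimal illustration under stated assumptions, not the authors' implementation: the use of metric MDS from scikit-learn and the linear rescaling to [0, 1]^3 are choices made here purely for demonstration.

```python
# Illustrative sketch (not the paper's implementation): embed spatial samples
# into 3D from a pairwise correlation matrix and map the embedding to RGB.
# Assumes numpy and scikit-learn; MDS is one possible embedding choice.
import numpy as np
from sklearn.manifold import MDS

def correlation_colors(series):
    """series: (n_samples, n_timesteps) array, one time series per spatial sample."""
    # Pairwise Pearson correlations between all spatial samples.
    corr = np.corrcoef(series)                         # (n_samples, n_samples)
    # Turn correlation into a dissimilarity so that highly correlated
    # samples end up close together in the embedding.
    dissimilarity = 1.0 - corr
    # 3D embedding of the dissimilarity matrix (metric MDS as an example).
    embedding = MDS(n_components=3, dissimilarity="precomputed",
                    random_state=0).fit_transform(dissimilarity)
    # One-to-one mapping of the 3D embedding to a 3D color space:
    # here simply rescaled to [0, 1]^3 and interpreted as RGB.
    lo, hi = embedding.min(axis=0), embedding.max(axis=0)
    return (embedding - lo) / (hi - lo)                # (n_samples, 3) RGB per sample

# Example: 500 spatial samples with 200 time steps each.
rgb = correlation_colors(np.random.rand(500, 200))
```

For an ensemble, one correlation matrix per run could be computed and aggregated (e.g., averaged) before embedding; the paper's exact handling of the ensemble is not reproduced here.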
Overfitting is one of the fundamental challenges when training convolutional neural networks and is usually identified by diverging training and test losses. The underlying dynamics of how the flow of activations induces overfitting are, however, poorly understood. In this study we introduce a perplexity-based sparsity definition to derive and visualise layer-wise activation measures. These novel explainable AI strategies reveal a surprising relationship between activation sparsity and overfitting, namely an increase in sparsity in the feature extraction layers shortly before the test loss starts rising. This tendency is preserved across network architectures and regularisation strategies, so that our measures can be used as a reliable indicator for overfitting while decoupling the network's generalisation capabilities from its loss-based definition. Moreover, our differentiable sparsity formulation can be used to explicitly penalise the emergence of sparsity during training, so that the impact of reduced sparsity on overfitting can be studied in real time. Applying this penalty and analysing activation sparsity for well-known regularisers and in common network architectures supports the hypothesis that reduced activation sparsity can effectively improve the generalisation and classification performance. In line with other recent work on this topic, our methods reveal novel insights into the conflicting concepts of activation sparsity and network capacity by demonstrating that dense activations can enable discriminative feature learning while efficiently exploiting the capacity of deep models without suffering from overfitting, even when trained excessively.
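As a rough illustration of what a perplexity-based, differentiable sparsity measure might look like, the sketch below normalises a layer's non-negative (e.g. post-ReLU) activations to a probability distribution, takes the exponential of its Shannon entropy as the perplexity, and converts that into a sparsity score in [0, 1]. The exact formulation used in the study may differ; the normalisation scheme, the epsilon constant, and the use of PyTorch are assumptions made here for illustration.

```python
# Hedged sketch of a perplexity-based, differentiable activation sparsity measure;
# the paper's exact definition is not reproduced here. Assumes PyTorch and
# non-negative (e.g. post-ReLU) layer activations.
import torch

def perplexity_sparsity(activations: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """activations: (batch, num_units) non-negative layer activations.
    Returns a scalar in [0, 1]; 0 = fully dense, ~1 = maximally sparse."""
    a = activations.flatten(start_dim=1)
    n = a.shape[1]
    # Normalise each sample's activations to a probability distribution.
    p = a / (a.sum(dim=1, keepdim=True) + eps)
    # Shannon entropy per sample and its exponential (the perplexity).
    entropy = -(p * torch.log(p + eps)).sum(dim=1)
    perplexity = torch.exp(entropy)          # lies in [1, n]
    # Few active units -> low perplexity -> sparsity close to 1.
    sparsity = 1.0 - perplexity / n
    return sparsity.mean()

# Hypothetical usage: because every operation is differentiable, the score can be
# added to the training loss with a small weight to penalise emerging sparsity,
# e.g. loss = task_loss + 0.1 * perplexity_sparsity(hidden_activations)
```

Averaging the per-sample scores keeps the measure comparable across batch sizes and layers, which is what makes it usable both as a monitoring signal and as an explicit training penalty.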