Amongst the many benefits of remote sensing techniques in disaster- or conflict-related applications, timeliness and objectivity may be the most critical assets. Recently, increasing sensor quality and data availability have shifted attention towards the information extraction process itself. While deep learning (DL) has produced promising results, DL is not agnostic to input errors or introduced biases, in particular in sample-scarce situations. The present work seeks to understand how different aspects of sample quality propagate through network layers in automated image analysis. In this paper, we broadly discuss the conceptualisation of a sample database capturing these quality aspects, currently at an early stage of realisation, covering (1) inherited properties (quality parameters of the underlying image such as cloud cover, seasonality, etc.); (2) individual (i.e., per-sample) properties, including (a) lineage and provenance, (b) geometric properties (size, orientation, shape), and (c) spectral features (standardised colour code); and (3) context-related properties (arrangement). In an initial stage, several hundred samples collected from different camp settings were hand-selected and annotated with computed features. The supervised annotation routine is automated so that thousands of existing samples can be labelled with this extended feature set, which should better condition subsequent DL tasks in a hybrid AI approach.
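As a concrete illustration of the per-sample feature set described above, the sketch below shows how the geometric properties (size, orientation, shape) could be derived from an annotated footprint polygon. It is a minimal example under stated assumptions, not the project's actual annotation routine: it assumes the Shapely library and a projected (metric) coordinate system, and all function and field names are hypothetical.

```python
# Minimal sketch (an assumption, not the authors' implementation) of the
# per-sample geometric features named above: size, orientation and shape.
# Assumes the Shapely library and a projected (metric) CRS; function and
# field names are hypothetical.
import math
from shapely.geometry import Polygon

def geometric_features(footprint: Polygon) -> dict:
    """Derive size, orientation and shape descriptors for one annotated sample."""
    mrr = footprint.minimum_rotated_rectangle       # tightest rotated bounding box
    xs, ys = mrr.exterior.coords.xy
    # Orientation: angle of the longer bounding-box edge, folded into [0, 180) degrees
    edges = [(xs[i + 1] - xs[i], ys[i + 1] - ys[i]) for i in range(2)]
    dx, dy = max(edges, key=lambda e: math.hypot(*e))
    orientation = math.degrees(math.atan2(dy, dx)) % 180.0
    area, perimeter = footprint.area, footprint.length
    return {
        "size_m2": area,
        "orientation_deg": orientation,
        # Polsby-Popper compactness: 1.0 for a circle, lower for elongated shapes
        "compactness": 4.0 * math.pi * area / perimeter ** 2,
    }

# Example: a 10 m x 4 m dwelling footprint
print(geometric_features(Polygon([(0, 0), (10, 0), (10, 4), (0, 4)])))
```

In the envisaged database, such computed descriptors would be stored alongside the inherited (image-level) and context-related properties for each sample.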