A study on the application of the Tensor Train decomposition method to 3D direct numerical simulation data of turbulent channel flow is presented. The approach is validated with respect to compression rate and storage requirement. In tests with synthetic data, grid-aligned self-similar patterns are found to be captured well, and the application to non-grid-aligned self-similarity also yields satisfying results. The shape of the input Tensor is observed to affect the compression rate significantly. Applied to data of turbulent channel flow, the Tensor Train format allows for surprisingly high compression rates whilst ensuring low relative errors.

Keywords: self-similarity; turbulent flows; Tensor Train format

1 Introduction

Multidimensional data sets, i.e. data of dimension 3 or higher, require massive storage capacity that depends strongly on the (Tensor) dimension, d, and on the number of entries per dimension, n. The data size, or storage requirement, scales as O(n^d), with n = max_i {n_i}. It is this curse of dimensionality that makes it difficult to handle higher-order Tensors, and hence big data, in an appropriate manner. Tensor product decomposition methods [12] were originally developed to yield low-rank, i.e. data-sparse, representations or approximations of high-dimensional data in mathematical applications. It has been shown that these methods are competitive with approximations by classical function systems, e.g. polynomials and wavelets, and that they allow very compact representations of large data sets. Novel developments focus on hierarchical Tensor formats such as the Tree-Tucker format [10] and the Tensor Train format [25], [11]. Nowadays, those hierarchical methods are successfully applied in, e.g., physics and chemistry, where they are used for many-body problems and quantum states.

Here, we apply the Tensor Train decomposition to data of a 3D direct numerical simulation (DNS) of a turbulent channel flow. We aim at capturing self-similar structures that

Th. von Larcher
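To make the O(n^d) storage scaling and the data-sparse alternative concrete, the following back-of-the-envelope comparison counts entries of a full 3D Tensor against those of a Tensor Train representation. The values of n, d, and the TT rank r are hypothetical and not taken from this study:

```python
# Illustrative storage counts: full Tensor vs. Tensor Train format.
# n, d, r are hypothetical values, not figures from the paper.
n, d, r = 256, 3, 20

# Full storage: the curse of dimensionality, O(n^d) entries.
full_entries = n ** d

# TT storage for d = 3: cores of shape (1,n,r), (r,n,r), (r,n,1),
# i.e. O(d * n * r^2) entries -- linear in d instead of exponential.
tt_entries = n * r + r * n * r + r * n

print(full_entries, tt_entries, full_entries / tt_entries)
```

For these (hypothetical) parameters the full Tensor holds 16,777,216 entries, the TT cores only 112,640, a reduction of two orders of magnitude; the gap widens rapidly with growing d as long as the ranks r stay moderate.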
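The decomposition referred to above can be sketched with the standard TT-SVD algorithm, which builds the cores by a sweep of truncated SVDs over successive matricizations. The following minimal NumPy illustration is our own sketch, not the paper's implementation; the function names and the synthetic low-rank test Tensor are ours:

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Sketch of TT-SVD: decompose a d-dimensional array (d >= 2) into
    Tensor Train cores by a sweep of truncated SVDs."""
    shape = tensor.shape
    d = len(shape)
    # Distribute the relative error budget eps over the d-1 SVDs.
    delta = eps * np.linalg.norm(tensor) / np.sqrt(d - 1)
    cores, r_prev, C = [], 1, tensor
    for k in range(d - 1):
        C = C.reshape(r_prev * shape[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        # tail[r] = norm of the discarded singular values s[r:].
        tail = np.sqrt(np.concatenate(([0.0], np.cumsum(s[::-1] ** 2)))[::-1])
        # Smallest rank whose truncation error stays below delta.
        r = max(1, int(np.argmax(tail <= delta)))
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        C = s[:r, None] * Vt[:r]          # carry the remainder rightwards
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into the full array."""
    res = cores[0]
    for core in cores[1:]:
        res = np.tensordot(res, core, axes=([-1], [0]))
    return res.reshape([c.shape[1] for c in cores])

# Synthetic test Tensor of TT rank at most 3 (sum of 3 rank-1 terms).
rng = np.random.default_rng(0)
u, v, w = rng.random((3, 8)), rng.random((3, 9)), rng.random((3, 10))
X = np.einsum('ai,aj,ak->ijk', u, v, w)

cores = tt_svd(X, eps=1e-10)
X_hat = tt_to_full(cores)
rel_err = np.linalg.norm(X_hat - X) / np.linalg.norm(X)
stored = sum(c.size for c in cores)
print([c.shape for c in cores], X.size, stored, rel_err)
```

On such an exactly low-rank input, the sweep recovers the TT ranks (here 3) and reproduces the Tensor to machine precision while storing far fewer entries than the full array; for DNS data the ranks, and hence the achievable compression, depend on how well the flow fields admit low-rank structure.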