Abstract

Tensor decompositions, in particular the Tucker model, are a powerful family of techniques for dimensionality reduction and are increasingly used for compactly encoding large multidimensional arrays, images and other visual data sets. In interactive applications, volume data often needs to be decompressed and manipulated dynamically; when designing data reduction and reconstruction methods, several parameters must be taken into account, such as the achievable compression ratio, the approximation error and the reconstruction speed. Weighing these variables effectively is challenging, and here we present two main contributions to address this issue for Tucker tensor decompositions. First, we provide algorithms to efficiently compute, store and retrieve good choices of tensor rank selection and decompression parameters, so as to optimize memory usage, approximation quality and computational cost. Second, we propose an alternative Tucker compression scheme based on coefficient thresholding and zigzag traversal, followed by logarithmic quantization of both the transformed tensor core and its factor matrices. In terms of approximation accuracy, this approach is theoretically and empirically better than the commonly used tensor rank truncation method.
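To make the second contribution concrete, the following is a minimal NumPy sketch of the thresholding-plus-logarithmic-quantization idea, not the paper's actual implementation. All names (`zigzag_order`, `log_quantize`, `threshold`, `bits`) are illustrative, and the zigzag traversal is approximated here by one plausible N-dimensional generalization: visiting core indices in order of increasing coordinate sum, on the assumption that after a HOSVD the largest coefficients cluster near the core's (0, ..., 0) corner.

```python
import numpy as np

def zigzag_order(shape):
    """Core indices sorted by coordinate sum: a simple N-D analogue of
    2-D zigzag traversal, grouping the (typically large) coefficients
    near the core's (0, ..., 0) corner first."""
    idx = np.indices(shape).reshape(len(shape), -1).T
    return idx[np.argsort(idx.sum(axis=1), kind="stable")]

def log_quantize(core, threshold, bits=8):
    """Zero out coefficients below `threshold`, then quantize the
    magnitudes of the survivors on a log scale with 2**bits levels."""
    mask = np.abs(core) >= threshold
    if not mask.any():                       # degenerate case: nothing survives
        return np.zeros(core.shape, np.uint16), np.sign(core), mask, (0.0, 0.0)
    logs = np.log(np.abs(core[mask]))        # quantize in the log domain
    lo, hi = logs.min(), logs.max()
    levels = (1 << bits) - 1
    scale = (hi - lo) or 1.0                 # avoid division by zero
    q = np.zeros(core.shape, dtype=np.uint16)
    q[mask] = np.round((logs - lo) / scale * levels)
    return q, np.sign(core), mask, (lo, hi)

def log_dequantize(q, signs, mask, lo_hi, bits=8):
    """Invert log_quantize: map quantized levels back to magnitudes
    in the log domain and restore the stored signs."""
    lo, hi = lo_hi
    levels = (1 << bits) - 1
    out = np.zeros(q.shape)
    out[mask] = signs[mask] * np.exp(lo + q[mask] / levels * (hi - lo))
    return out
```

Unlike rank truncation, which discards whole trailing slices of the core, this scheme keeps any coefficient whose magnitude clears the threshold regardless of its position, which is why it can retain more of the signal energy at the same storage budget.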