In the course of uncertainty quantification tasks or data analysis, one frequently has to deal with high-dimensional random variables. The interest here is mainly to compute characterizations such as the entropy, the Kullback–Leibler divergence, more general $f$-divergences, or other such quantities based on the probability density. The density is often not available directly, and even representing it in a numerically feasible fashion is a computational challenge once the dimension is moderately large; actually computing the mentioned characterizations in the high-dimensional case is a still stronger numerical challenge. It is therefore proposed to approximate the discretized density in a compressed form, in particular by a low-rank tensor. This representation can alternatively be obtained from the corresponding probabilistic characteristic function, or from more general representations of the underlying random variable. The mentioned characterizations require point-wise functions such as the logarithm. This normally rather trivial task becomes computationally difficult when the density is approximated in a compressed, i.e. low-rank, tensor format, as the point values are not directly accessible. The computations become possible by regarding the compressed data as an element of an associative, commutative algebra with an inner product, and by using matrix algorithms to accomplish the mentioned tasks. The representation as a low-rank element of a high-order tensor space allows one to reduce the computational complexity and storage cost from exponential in the dimension to almost linear.
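To make the complexity claim concrete, the following is a minimal sketch (Python/NumPy; not the paper's implementation, and names such as `entropy_rank1` are purely illustrative) for the simplest compressed format, a rank-1 (separable) density. Because the logarithm of a rank-1 tensor splits into a sum over the modes, the entropy can be evaluated from the one-dimensional factors alone, at a cost linear in the dimension rather than exponential.

```python
import numpy as np

def entropy_rank1(factors):
    """Shannon entropy -sum p log p of a discretized density stored as a
    rank-1 tensor p = f1 (x) f2 (x) ... (x) fd, never formed explicitly.
    Each factor is a 1-D probability vector summing to 1, so
    log p(i1,...,id) = sum_k log fk(ik), and the entropy reduces to the
    sum of the marginal entropies: cost O(d*n) instead of O(n**d)."""
    return -sum(float(f @ np.log(f)) for f in factors)

# Example: a 10-dimensional grid with 100 points per axis would need
# 100**10 = 1e20 entries in full storage, but only 10 * 100 numbers here.
rng = np.random.default_rng(0)
factors = []
for _ in range(10):
    f = rng.random(100) + 0.1   # keep entries strictly positive
    factors.append(f / f.sum())  # normalize each 1-D factor

print(entropy_rank1(factors))  # equals the sum of the marginal entropies
```

For genuinely low-rank formats of rank greater than one (e.g. canonical or tensor-train representations), the logarithm no longer separates in this way, which is precisely why the compressed data are treated as elements of an algebra and matrix-type iterative algorithms are employed.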