Folding grid value vectors of size 2^L into L-th order tensors of mode size 2 × ⋯ × 2, combined with low-rank representation in the tensor train format, has been shown to result in highly efficient approximations for various classes of functions. These include solutions of elliptic PDEs on nonsmooth domains or with oscillatory data. This tensor-structured approach is attractive because it leads to highly compressed, adaptive approximations based on simple discretizations. Standard choices of the underlying bases, such as piecewise multilinear finite elements on uniform tensor product grids, entail the well-known matrix ill-conditioning of discrete operators. We demonstrate that, for low-rank representations, the use of tensor structure itself additionally introduces representation ill-conditioning, a new effect specific to computations in tensor networks. We analyze the tensor structure of a BPX preconditioner for a second-order linear elliptic operator and construct an explicit tensor-structured representation of the preconditioner, with ranks independent of the number L of discretization levels. The straightforward application of the preconditioner yields discrete operators whose matrix conditioning is uniform with respect to the discretization parameter, but in decompositions that suffer from representation ill-conditioning. By additionally eliminating certain redundancies in the representations of the preconditioned discrete operators, we obtain reduced-rank decompositions that are free of both matrix and representation ill-conditioning. For an iterative solver based on soft thresholding of low-rank tensors, we obtain convergence and complexity estimates and demonstrate its reliability and efficiency for discretizations with up to 2^50 nodes in each dimension.
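To make the folding concrete: the sketch below is a minimal NumPy illustration (not the paper's algorithm or code) of reshaping a grid value vector of size 2^L into an L-th order tensor with binary modes, followed by a basic truncated TT-SVD. The function name tt_svd, the test function sin(πx), and the tolerance are our own illustrative choices; for smooth data such as this, the reported TT ranks stay small independently of L.

```python
import numpy as np

def tt_svd(t, tol=1e-12):
    """Truncated TT-SVD: decompose an L-way tensor into a list of 3-way cores."""
    cores, r, c = [], 1, t
    for k in range(t.ndim - 1):
        m = c.reshape(r * t.shape[k], -1)          # unfold: (r_{k-1} * n_k) x rest
        U, s, Vt = np.linalg.svd(m, full_matrices=False)
        rk = max(1, int(np.count_nonzero(s > tol * s[0])))  # truncate small singular values
        cores.append(U[:, :rk].reshape(r, t.shape[k], rk))
        c = s[:rk, None] * Vt[:rk]                 # carry the remainder to the next mode
        r = rk
    cores.append(c.reshape(r, t.shape[-1], 1))
    return cores

L = 10
x = np.linspace(0.0, 1.0, 2**L, endpoint=False)
v = np.sin(np.pi * x)                              # grid values of a smooth function

# Fold the length-2^L vector into an L-th order tensor of mode size 2 x ... x 2.
cores = tt_svd(v.reshape((2,) * L))
print("TT ranks:", [c.shape[2] for c in cores[:-1]])  # small ranks (here 2 at each bond)

# Contract the cores back together and check the approximation error.
w = cores[0]
for c in cores[1:]:
    w = np.tensordot(w, c, axes=([-1], [0]))
print("max error:", np.max(np.abs(w.reshape(-1) - v)))
```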
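The soft-thresholding operation underlying the iterative solver can likewise be sketched at the matrix level. In the paper it is applied to low-rank tensors; the snippet below shows only the basic building block, shrinking singular values by a threshold τ, under our own assumed parameter choices.

```python
def soft_threshold(A, tau):
    """Soft thresholding of a matrix's singular values: shrink each by tau,
    dropping those that reach zero, which reduces the rank."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s = np.maximum(s - tau, 0.0)       # shrink singular values toward zero
    keep = s > 0
    return (U[:, keep] * s[keep]) @ Vt[keep]
```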