We study the complexity of high-dimensional approximation in the $L_2$-norm when different classes of information are available; we compare the power of function evaluations with the power of arbitrary continuous linear measurements. Here, we discuss the situation where the number of linear measurements required to achieve an error $\varepsilon \in (0,1)$ in dimension $d\in \mathbb{N}$ depends only poly-logarithmically on $\varepsilon^{-1}$. This corresponds to an exponential order of convergence of the approximation error, which often happens in applications. However, it does not mean that the high-dimensional approximation problem is easy; the main difficulty usually lies in the dependence on the dimension $d$. We determine to what extent the required amount of information changes if we allow only function evaluations instead of arbitrary linear information. It turns out that in this case we lose only very little, and we can even restrict ourselves to linear algorithms. In particular, several notions of tractability hold simultaneously for both types of available information.
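For orientation, the correspondence between poly-logarithmic information complexity and exponential convergence can be sketched as follows; the quantities $n(\varepsilon,d)$, $e(n,d)$ and the constants $C_d, c_d, p_d > 0$ are illustrative placeholders, not the paper's exact definitions. If the minimal number of measurements satisfies
$$ n(\varepsilon,d) \;\le\; C_d \,\bigl(1 + \ln \varepsilon^{-1}\bigr)^{p_d} \qquad \text{for all } \varepsilon \in (0,1), $$
then, up to constants, the $n$-th minimal worst-case error decays like
$$ e(n,d) \;\le\; \exp\!\bigl(-c_d\, n^{1/p_d}\bigr), $$
i.e., the approximation error converges exponentially fast in a power of $n$, while tractability concerns how $C_d$, $c_d$, and $p_d$ may grow with $d$.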