We study multivariate integration and approximation for functions belonging to a weighted reproducing kernel Hilbert space based on half-period cosine functions in the worst-case setting. The weights in the norm of the function space depend on two sequences of real numbers and decay exponentially. As a consequence, the functions are infinitely often differentiable, and it is therefore natural to expect exponential convergence of the worst-case error. We give conditions on the weight sequences under which we have exponential convergence for the integration as well as the approximation problem. Furthermore, we investigate the dependence of the errors on the dimension by considering various notions of tractability. We prove necessary and sufficient conditions for these tractability notions to hold.

Keywords: numerical integration, function approximation, cosine space, worst-case error, exponential convergence, tractability

2010 MSC: 41A63, 41A25, 65C05, 65D30, 65Y20

Without loss of generality (see, e.g., [15] or [12, Section 4]), we approximate $S_s$ by a linear algorithm $A_{n,s}$ using $n$ information evaluations which are given by linear functionals from the class $\Lambda \in \{\Lambda^{\mathrm{all}}, \Lambda^{\mathrm{std}}\}$. More precisely, we approximate $S_s$ by algorithms of the form
$$A_{n,s}(f) = \sum_{k=1}^{n} a_k L_k(f) \quad \text{for } f \in H_s,$$
where $a_k \in G_s$ and $L_k \in \Lambda$. Obviously, for multivariate integration only the class $\Lambda^{\mathrm{std}}$ makes sense. Furthermore, we remark that in this paper we consider only function spaces for which $\Lambda^{\mathrm{std}} \subset \Lambda^{\mathrm{all}}$.

We measure the error of an algorithm $A_{n,s}$ in terms of the worst-case error, which is defined as
$$e(A_{n,s}) = \sup_{\substack{f \in H_s \\ \|f\|_{H_s} \le 1}} \|S_s(f) - A_{n,s}(f)\|_{G_s},$$
where $\|\cdot\|_{H_s}$ and $\|\cdot\|_{G_s}$ denote the norms in $H_s$ and $G_s$, respectively. The $n$th minimal (worst-case) error is given by
$$e(n, S_s) = \inf_{A_{n,s}} e(A_{n,s}),$$
where the infimum is taken over all admissible algorithms $A_{n,s}$. When we want to emphasize that the $n$th minimal error is taken with respect to algorithms using information from the class $\Lambda \in \{\Lambda^{\mathrm{all}}, \Lambda^{\mathrm{std}}\}$, we write $e(n, S_s; \Lambda)$. For $n = 0$, we consider algorithms that do not use information evaluations, and therefore we use $A_{0,s} \equiv 0$.
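As an illustration (not the paper's algorithm), the following minimal sketch builds a linear algorithm of the above form for $s = 1$ that uses only standard information, i.e. function values: it approximates the first $m$ half-period cosine coefficients of $f$ by a midpoint rule at $n$ nodes and returns the truncated cosine expansion. The basis $\{1, \sqrt{2}\cos(\pi k x)\}$ and the choice of nodes, $n$, and $m$ are assumptions made for this example.

```python
import numpy as np

def cosine_basis(k, x):
    """Half-period cosine basis on [0, 1]: 1 for k = 0, sqrt(2) cos(pi k x) otherwise."""
    x = np.asarray(x, dtype=float)
    return np.ones_like(x) if k == 0 else np.sqrt(2.0) * np.cos(np.pi * k * x)

def linear_algorithm(f, n, m):
    """A linear algorithm A_{n,1}(f) built from n function values (class Lambda^std)."""
    nodes = (np.arange(n) + 0.5) / n           # midpoint-rule nodes in [0, 1]
    values = f(nodes)                          # the n information evaluations
    # Approximate the coefficients <f, phi_k> by the midpoint quadrature.
    coeffs = [np.mean(values * cosine_basis(k, nodes)) for k in range(m)]
    return lambda x: sum(c * cosine_basis(k, x) for k, c in enumerate(coeffs))

# Example: a smooth function whose even extension is also smooth, so its
# half-period cosine coefficients decay very quickly.
f = lambda x: np.exp(np.cos(np.pi * x))
approx = linear_algorithm(f, n=64, m=8)
grid = np.linspace(0.0, 1.0, 201)
max_err = np.max(np.abs(f(grid) - approx(grid)))
print(f"max pointwise error on grid: {max_err:.2e}")
```

Because the coefficients of this smooth test function decay extremely fast, already $m = 8$ terms and $n = 64$ evaluations give a very small pointwise error, consistent with the exponential convergence one expects for smooth integrands.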
The error of $A_{0,s}$ is called the initial (worst-case) error and is given by
$$e(0, S_s) = \sup_{\substack{f \in H_s \\ \|f\|_{H_s} \le 1}} \|S_s(f)\|_{G_s}.$$

Since we will study a class of weighted reproducing kernel Hilbert spaces with exponentially decaying weights, which will be introduced in Section 1.2, we are concerned with spaces $H_s$ of smooth functions. We remark that reproducing kernel Hilbert spaces of a similar flavor were previously considered in [2, 3, 5, 6, 7, 8, 9]. In this case it is natural to expect that, by using suitable algorithms, we should be able to obtain errors that converge to zero very quickly as $n$ increases, namely exponentially fast. By exponential convergence (EXP) for the worst-case error we mean that there exist a number $q \in (0, 1)$ and functions $p, C, M : \mathbb{N} \to (0, \infty)$ such that
$$e(n, S_s) \le C(s)\, q^{(n/M(s))^{p(s)}} \quad \text{for all } s, n \in \mathbb{N}. \tag{1}$$
If the function $p$ in (1) can be taken as a constant function, i.e., $p(s) = p > 0$ for all $s \in \mathbb{N}$, we say that we achieve uniform exponential convergence (UEXP) for $e(n, S_s)$. Furthermore, we denote by $p^*(s)$ and $p^*$ the largest possible rates $p(s)$ and $p$ such that EXP and UEXP hold, respectively. When stu...
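A useful consequence of the bound (1) is that the number of information evaluations needed to reach error $\varepsilon$ grows only polylogarithmically in $\varepsilon^{-1}$: solving $C(s)\, q^{(n/M(s))^{p(s)}} \le \varepsilon$ for $n$ gives $n \ge M(s) \left(\log(C(s)/\varepsilon)/\log(1/q)\right)^{1/p(s)}$. The short sketch below checks this numerically; the parameter values $C, q, M, p$ are arbitrary choices for illustration, not values from the paper.

```python
import math

def exp_bound(n, C=1.0, q=0.5, M=1.0, p=1.0):
    """Right-hand side of the EXP bound (1): C * q ** ((n / M) ** p)."""
    return C * q ** ((n / M) ** p)

def n_for_eps(eps, C=1.0, q=0.5, M=1.0, p=1.0):
    """Smallest n guaranteed by (1) to bring the error below eps."""
    return math.ceil(M * (math.log(C / eps) / math.log(1.0 / q)) ** (1.0 / p))

# Halving the target accuracy repeatedly only adds a constant number of
# evaluations when p = 1; for general p the growth is (log(1/eps))^(1/p).
for eps in (1e-2, 1e-4, 1e-8):
    n = n_for_eps(eps)
    print(f"eps = {eps:.0e}: n = {n}, bound satisfied: {exp_bound(n) <= eps}")
```

This polylogarithmic dependence on $\varepsilon^{-1}$ is what distinguishes exponential convergence from the algebraic rates typical of finite-smoothness spaces, and it is the reason the tractability notions studied later are phrased in terms of $\log \varepsilon^{-1}$ rather than $\varepsilon^{-1}$.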