Abstract. We consider the solution of a stochastic convex optimization problem E[f(x; θ*, ξ)] over a closed and convex set X in a regime where θ* is unavailable and ξ is a suitably defined random variable. Instead, θ* may be obtained by solving a learning problem that requires minimizing a metric E[g(θ; η)] in θ over a closed and convex set Θ. Traditional approaches have been either sequential or direct variational approaches. The former entails the following steps: (i) a solution to the learning problem, namely θ*, is obtained; and (ii) a solution is obtained to the associated computational problem, which is parametrized by θ*. Such avenues prove difficult to adopt, particularly since the learning process has to be terminated finitely; consequently, in large-scale instances, sequential approaches may often be corrupted by error. On the other hand, a variational approach requires that the problem be recast as a possibly non-monotone stochastic variational inequality problem in the (x, θ) space; however, no first-order stochastic approximation schemes are currently available for the solution of this problem. To resolve the absence of convergent efficient schemes, we present a coupled stochastic approximation scheme which simultaneously solves both the computational and the learning problems. The obtained schemes are shown to be equipped with almost sure convergence properties in regimes where the function f is either strongly convex or merely convex. Importantly, the scheme displays the optimal rate for strongly convex problems, while in merely convex regimes, through an averaging approach, we quantify the degradation associated with learning by noting that the error in function value after K steps is O(ln(K)/K), rather than the O(1/K) achievable when θ* is available. Notably, when the averaging window is modified suitably, it can be seen that the original rate of O(1/K) is recovered.
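The coupled scheme described above can be sketched as a pair of simultaneous projected stochastic gradient updates; the notation below (iterates x_k and θ_k, step-size sequences γ_k and β_k, Euclidean projections Π_X and Π_Θ, and samples ξ_k, η_k) is ours and should be read as an illustrative assumption rather than the paper's exact prescription:

```latex
% Sketch of a coupled stochastic approximation scheme (notation assumed):
% the computational iterate x_k and the learning iterate \theta_k are
% updated simultaneously, each from its own sampled gradient.
\begin{align*}
  x_{k+1}      &= \Pi_X\!\bigl( x_k - \gamma_k \, \nabla_x f(x_k;\, \theta_k,\, \xi_k) \bigr), \\
  \theta_{k+1} &= \Pi_\Theta\!\bigl( \theta_k - \beta_k \, \nabla_\theta g(\theta_k;\, \eta_k) \bigr).
\end{align*}
```

Under this reading, the computational update at step k uses the current (misspecified) estimate θ_k in place of the unavailable θ*, which is the source of the learning-induced degradation quantified in the rate statements.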
Additionally, we consider an online counterpart of the misspecified optimization problem and provide a non-asymptotic bound on the average regret with respect to an offline counterpart. In the second part of the paper, we extend these statements to a class of stochastic variational inequality problems, an object that unifies stochastic convex optimization problems and a range of stochastic equilibrium problems. Analogous almost-sure convergence statements are provided in strongly monotone and merely monotone regimes, the latter facilitated by an iterative Tikhonov regularization. In the merely monotone regime, under a weak-sharpness requirement, we quantify the degradation associated with learning and show that the expected error associated with dist(x_k, X*) is O(ln(K)/K). Preliminary numerics demonstrate the performance of the prescribed schemes.
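In the merely monotone regime, the iterative Tikhonov regularization mentioned above can be sketched as follows; the map F, the regularization sequence ε_k ↓ 0, and the step sizes γ_k are our assumed notation, not necessarily the paper's:

```latex
% Sketch of an iteratively Tikhonov-regularized projection step (notation assumed):
% F(\,\cdot\,; \theta_k, \xi_k) is the sampled, possibly merely monotone map;
% adding \epsilon_k x_k makes each step's map strongly monotone, with
% \epsilon_k \downarrow 0 so that the regularization vanishes asymptotically.
\begin{align*}
  x_{k+1} &= \Pi_X\!\bigl( x_k - \gamma_k \bigl( F(x_k;\, \theta_k,\, \xi_k) + \epsilon_k x_k \bigr) \bigr),
\end{align*}
```

coupled, as before, with a projected stochastic gradient update of the learning iterate θ_k.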