In this paper, we study a stochastic strongly convex optimization problem and propose three classes of variable sample-size stochastic first-order methods: the standard stochastic gradient descent method, its accelerated variant, and the stochastic heavy-ball method. At each iteration of each scheme, the unavailable exact gradient is approximated by an average over an increasing batch of sampled gradients. We prove that when the sample size increases geometrically, the generated estimates converge in mean to the optimal solution at a geometric rate. Building on this result, we provide a unified framework to show that the rescaled estimation errors converge in distribution to a normal distribution whose covariance matrix depends on the Hessian matrix, the covariance of the gradient noise, and the steplength. If the sample size instead increases at a polynomial rate, we show that the estimation errors decay at the corresponding polynomial rate and establish the associated central limit theorems (CLTs). Finally, we provide an avenue to construct confidence regions for the optimal solution based on the established CLTs, and we test the theoretical findings on a stochastic parameter estimation problem.
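To make the variable sample-size scheme concrete, a minimal sketch of the gradient update follows, under assumed notation (the iterate $x_k$, steplength $\gamma_k$, batch size $N_k$, sampled gradient $\nabla F(x_k,\xi_{j,k})$, and geometric growth factor $\rho \in (0,1)$ are illustrative choices, not necessarily the paper's notation):
\[
  x_{k+1} \;=\; x_k \;-\; \gamma_k\,\frac{1}{N_k}\sum_{j=1}^{N_k}\nabla F\bigl(x_k,\xi_{j,k}\bigr),
  \qquad N_k = \bigl\lceil \rho^{-(k+1)} \bigr\rceil .
\]
Here the exact gradient $\nabla f(x_k) = \mathbb{E}\bigl[\nabla F(x_k,\xi)\bigr]$ is replaced by a sample average whose batch size grows geometrically, which is the mechanism behind the geometric convergence in mean stated above.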