We apply a stochastic sequential quadratic programming (StoSQP) algorithm to solve constrained nonlinear optimization problems, where the objective is stochastic and the constraints are deterministic equality constraints. We study a fully stochastic setup, where only a single sample is available in each iteration for estimating the gradient and Hessian of the objective. We allow StoSQP to select a random stepsize $\bar{\alpha}_t$ adaptively, such that $\beta_t \leq \bar{\alpha}_t \leq \beta_t + \chi_t$, where $\beta_t$ and $\chi_t = o(\beta_t)$ are prespecified deterministic sequences. We also allow StoSQP to solve the Newton system inexactly via randomized iterative solvers, e.g., the sketch-and-project method; and we do not require the approximation error of the inexact Newton direction to vanish (thus, the per-iteration computational cost does not blow up). For this general StoSQP framework, we establish the asymptotic convergence rate of its last iterate, with the worst-case iteration complexity as a byproduct, and we perform statistical inference. In particular, under mild assumptions and with properly decaying sequences $\beta_t, \chi_t$, we show that: (i) the StoSQP scheme takes at most $O(1/\epsilon^4)$ iterations to achieve $\epsilon$-stationarity; (ii) asymptotically and almost surely, $\|(x_t - x^\star, \lambda_t - \lambda^\star)\| = O(\sqrt{\beta_t \log(1/\beta_t)}) + O(\chi_t/\beta_t)$, where $(x_t, \lambda_t)$ is the primal-dual StoSQP iterate; (iii) the rescaled sequence $1/\sqrt{\beta_t} \cdot (x_t - x^\star, \lambda_t - \lambda^\star)$ converges in distribution to a mean-zero Gaussian with a nontrivial covariance matrix. Furthermore, we establish a Berry-Esseen bound for $(x_t, \lambda_t)$ to quantitatively measure the convergence of its distribution function. We also provide a practical estimator for the covariance matrix, from which confidence intervals (or regions) for $(x^\star, \lambda^\star)$ can be constructed using the iterates $\{(x_t, \lambda_t)\}_t$. All our theorems are validated on nonlinear problems from the CUTEst test set.
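To make the scheme concrete, below is a minimal Python sketch of one StoSQP iteration under the setup above. It is illustrative only, not the paper's implementation: the helpers `sample_grad_hess`, `constraint`, and `constraint_jac` and the sequences `beta`, `chi` are hypothetical placeholders, and the single-column Gaussian sketch is just one instance of a sketch-and-project solver for the Newton (KKT) system.

```python
import numpy as np

def stosqp_step(x, lam, t, sample_grad_hess, constraint, constraint_jac,
                beta, chi, n_sketch=20, rng=None):
    """One StoSQP iteration: single-sample estimates, an inexact Newton solve
    via sketch-and-project, and a random stepsize in [beta_t, beta_t + chi_t]."""
    rng = np.random.default_rng() if rng is None else rng
    g, H = sample_grad_hess(x)               # one-sample gradient/Hessian estimate
    c, J = constraint(x), constraint_jac(x)  # deterministic c(x) and its Jacobian
    n, m = x.size, c.size

    # Newton (KKT) system  K z = rhs  for the primal-dual direction z = (dx, dlam).
    K = np.block([[H, J.T], [J, np.zeros((m, m))]])
    rhs = -np.concatenate([g + J.T @ lam, c])

    # Sketch-and-project: repeatedly project z onto {z : S^T K z = S^T rhs}
    # for random Gaussian sketches S; the residual need not be driven to zero.
    z = np.zeros(n + m)
    for _ in range(n_sketch):
        S = rng.standard_normal((n + m, 1))
        a = K.T @ S                           # sketched system matrix K^T S
        r = S.T @ (K @ z - rhs)               # sketched residual
        z -= a @ np.linalg.lstsq(a.T @ a, r, rcond=None)[0]
    dx, dlam = z[:n], z[n:]

    # Random adaptive stepsize with beta(t) <= alpha_t <= beta(t) + chi(t).
    alpha = beta(t) + chi(t) * rng.random()
    return x + alpha * dx, lam + alpha * dlam
```

For instance, one might pass `beta = lambda t: (t + 1) ** -0.7` and `chi = lambda t: (t + 1) ** -1.0`, which satisfy the requirement $\chi_t = o(\beta_t)$; these particular exponents are illustrative choices, not the paper's prescription.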