Privacy-protecting data analysis investigates statistical methods under privacy constraints. This is a rising challenge in modern statistics, as confidentiality guarantees are typically achieved through suitable perturbations of the data, which may cause a loss in the statistical utility of the data. In this paper, we consider privacy-protecting tests for goodness-of-fit in frequency tables, arguably the most common form of data release. Under the popular framework of (ε, δ)-differential privacy for perturbed data, we introduce a private likelihood-ratio test for goodness-of-fit and study its large-sample properties, showing the importance of taking the perturbation into account to avoid a loss in the statistical significance of the test. Our main contribution is a quantitative characterization of the trade-off between confidentiality, measured via the differential privacy parameters ε and δ, and utility, measured via the power of the test. In particular, we establish a precise Bahadur-Rao type large deviation expansion for the power of the private likelihood-ratio test, which leads us to: i) identify a critical quantity, as a function of the sample size and (ε, δ), that determines a loss in the power of the private likelihood-ratio test; ii) quantify the sample cost of (ε, δ)-differential privacy in the private likelihood-ratio test, namely the additional sample size required to recover the power of the likelihood-ratio test in the absence of perturbation. This result relies on a novel multidimensional large deviation principle for sums of i.i.d. random vectors, which is of independent interest. Our work presents the first rigorous treatment of privacy-protecting likelihood-ratio tests for goodness-of-fit in frequency tables, using the power of the test to quantify the trade-off between confidentiality and utility.
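To make the setting concrete, the following is a minimal sketch of the kind of private goodness-of-fit test described above, under stated assumptions: the frequency table is perturbed with the standard Gaussian mechanism for (ε, δ)-differential privacy (a common choice, not necessarily the mechanism analyzed in the paper), and the likelihood-ratio statistic is then computed from the noisy counts. All function names and the clipping of noisy counts are illustrative choices, not the paper's construction.

```python
# Hypothetical sketch: (eps, delta)-DP goodness-of-fit via a likelihood-ratio
# statistic on Gaussian-perturbed counts. Illustrative only.
import numpy as np


def gaussian_mechanism_sigma(epsilon, delta, l2_sensitivity=np.sqrt(2.0)):
    # Classical Gaussian-mechanism calibration (valid for epsilon < 1).
    # One individual changes two cells of a frequency table by 1 each,
    # giving L2 sensitivity sqrt(2).
    return l2_sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon


def private_lr_statistic(counts, p0, epsilon, delta, rng):
    """Likelihood-ratio statistic computed from DP-perturbed counts."""
    sigma = gaussian_mechanism_sigma(epsilon, delta)
    noisy = counts + rng.normal(0.0, sigma, size=len(counts))
    noisy = np.clip(noisy, 1e-9, None)  # keep the logarithms well defined
    n = noisy.sum()
    # 2 * sum_i n_i * log(n_i / (n * p0_i)), the usual LR (G-test) statistic.
    return 2.0 * np.sum(noisy * np.log(noisy / (n * p0)))


rng = np.random.default_rng(0)
p0 = np.array([0.25, 0.25, 0.25, 0.25])          # null hypothesis
counts = rng.multinomial(10_000, p0).astype(float)
stat = private_lr_statistic(counts, p0, epsilon=0.5, delta=1e-5, rng=rng)
```

In practice, the statistic would be compared against a calibrated critical value; the paper's point is precisely that calibrating and powering this comparison as if the counts were unperturbed ignores the noise, and that the power loss can be quantified via a Bahadur-Rao type expansion.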