In statistical practice, a realistic Bayesian model for a given data set can be defined by a likelihood function that is analytically or computationally intractable, owing to large sample size, high parameter dimensionality, or a complex likelihood functional form. This in turn poses challenges for computing and making inferences from the posterior distribution of the model parameters. For such a model, a tractable likelihood function is introduced that approximates the exact likelihood through its quantile function. It is defined by an asymptotic chi-square confidence distribution for a pivotal quantity, which is generated by the asymptotic normal distribution of the sample quantiles given the model parameters. This Quantile Implied Likelihood (QIL) gives rise to an approximate posterior distribution that can be estimated using penalized log-likelihood maximization or any suitable Monte Carlo algorithm. The QIL approach to Bayesian computation is illustrated through Bayesian analyses of simulated and real data sets with sample sizes reaching the millions. The analyses involve various models for univariate or multivariate, iid or non-iid data, with low or high parameter dimensionality, many of which are defined by intractable likelihoods. The probability models include the Student's t, g-and-h, and g-and-k distributions; the Bayesian logit regression model with many covariates; the exponential random graph model, a doubly-intractable model for networks; the multivariate skew-normal model, for robust inference on the inverse-covariance matrix when it is large relative to the sample size; and the Wallenius distribution model.
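To make the construction concrete, the following is a minimal sketch of how a QIL evaluation could look for one of the models mentioned above, the univariate Student's t distribution. It assumes the standard asymptotic result that sample quantiles are jointly normal with covariance Sigma_ij = min(p_i, p_j)(1 - max(p_i, p_j)) / (f(q_i) f(q_j)), so that the quadratic-form pivot is asymptotically chi-square with m degrees of freedom; the chi-square log-density of that pivot is then used as the approximate log-likelihood. The function name, parameterization, and quantile grid are illustrative choices, not the paper's exact implementation.

```python
import numpy as np
from scipy import stats

def qil_log_likelihood(data, theta, probs):
    """Sketch of a Quantile Implied Likelihood for a Student's t
    location-scale model with parameters theta = (loc, scale, df).

    The pivot T = n * (q_hat - q(theta))' Sigma(theta)^{-1} (q_hat - q(theta))
    is asymptotically chi-square with m = len(probs) degrees of freedom,
    and its chi-square log-density serves as the approximate log-likelihood.
    """
    loc, scale, df = theta
    n, m = len(data), len(probs)
    p = np.asarray(probs)

    # Sample quantiles of the observed data.
    q_hat = np.quantile(data, p)

    # Model quantiles and the model density evaluated at them.
    dist = stats.t(df=df, loc=loc, scale=scale)
    q_model = dist.ppf(p)
    f_q = dist.pdf(q_model)

    # Asymptotic covariance of the sample quantiles (up to the 1/n factor):
    # Sigma_ij = min(p_i, p_j) * (1 - max(p_i, p_j)) / (f(q_i) * f(q_j)).
    Sigma = np.minimum.outer(p, p) * (1.0 - np.maximum.outer(p, p))
    Sigma /= np.outer(f_q, f_q)

    # Chi-square pivotal quantity and its log-density (the QIL).
    diff = q_hat - q_model
    T = n * diff @ np.linalg.solve(Sigma, diff)
    return stats.chi2(df=m).logpdf(T)

# Usage sketch: the QIL should favor parameters near the data-generating
# values over clearly misspecified ones.
rng = np.random.default_rng(0)
data = stats.t(df=5, loc=1.0, scale=2.0).rvs(size=10_000, random_state=rng)
probs = np.linspace(0.05, 0.95, 19)
ll_true = qil_log_likelihood(data, (1.0, 2.0, 5.0), probs)
ll_off = qil_log_likelihood(data, (3.0, 2.0, 5.0), probs)
```

Because this surrogate log-likelihood is cheap to evaluate (it needs only m sample quantiles rather than the full data at each evaluation), it can be dropped into a generic optimizer or MCMC sampler in place of the exact log-likelihood.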