When used in complex engineered systems, such as communication networks, artificial intelligence (AI) models should be not only as accurate as possible, but also well calibrated. A well-calibrated AI model is one that can reliably quantify the uncertainty of its decisions, assigning high confidence levels to decisions that are likely to be correct, and low confidence levels to decisions that are likely to be erroneous. This paper investigates the application of conformal prediction as a general framework to obtain AI models that produce decisions with formal calibration guarantees. Conformal prediction transforms probabilistic predictors into set predictors that are guaranteed to contain the correct answer with a probability chosen by the designer. Such formal calibration guarantees hold irrespective of the true, unknown distribution underlying the generation of the variables of interest, and can be defined in terms of ensemble or time-averaged probabilities. In this paper, conformal prediction is applied for the first time to the design of AI for communication systems in conjunction with both frequentist and Bayesian learning, focusing on the key tasks of demodulation, modulation classification, and channel prediction. For demodulation and modulation classification, we apply both validation-based and cross-validation-based conformal prediction, while for channel prediction we investigate the use of online conformal prediction. For each task, we evaluate the probability that the set predictor contains the true output, validating the theoretical coverage guarantees of conformal prediction, as well as the informativeness of the predictor via the average predicted set size.
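To make the set-predictor construction concrete, the following is a minimal sketch of validation-based (split) conformal prediction for a generic classifier. It is illustrative only and not the paper's implementation: the function name, the choice of nonconformity score (one minus the probability assigned to the true label), and the toy data are assumptions introduced here for exposition.

```python
import numpy as np

def conformal_set(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Build prediction sets that contain the true label with
    probability at least 1 - alpha (marginally, over the random
    draw of calibration and test data).

    cal_probs  : (n, K) predicted class probabilities on calibration data
    cal_labels : (n,)   true calibration labels
    test_probs : (m, K) predicted class probabilities on test data
    """
    n = len(cal_labels)
    # Nonconformity score: 1 - probability assigned to the true label.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level)
    # Include every candidate label whose score does not exceed q.
    return [np.where(1.0 - p <= q)[0] for p in test_probs]

# Toy usage: a confident, accurate predictor yields small sets.
rng = np.random.default_rng(0)
cal_labels = rng.integers(0, 4, size=100)
cal_probs = np.full((100, 4), 0.02)
cal_probs[np.arange(100), cal_labels] = 0.94  # mass on the true label
test_probs = cal_probs[:5]
sets = conformal_set(cal_probs, cal_labels, test_probs)
```

In this toy example the predictor is highly confident and correct, so each prediction set reduces to a singleton containing the true label; with a poorly trained predictor the same procedure would return larger sets, preserving coverage at the expense of informativeness, which is exactly the trade-off measured by the average predicted set size.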