Generative AI models have recently achieved mainstream attention with the advent of powerful approaches such as Stable Diffusion, DALL-E, and Midjourney. Their underlying breakthrough generative mechanism, denoising diffusion modeling, can generate high-quality synthetic images and learn the distribution of complex, high-dimensional data. Recent research has begun to extend these models to medical, and specifically neuroimaging, data. Typical neuroimaging tasks such as diagnostic classification and predictive modeling often rely on deep learning approaches based on convolutional neural networks (CNNs) and vision transformers (ViTs), with additional steps needed to interpret the results. In our paper, we train conditional latent diffusion models (LDMs) and denoising diffusion probabilistic models (DDPMs) to provide insight into the effects of Alzheimer's disease (AD) on the brain's anatomy at the individual level. We first created diffusion models that could generate synthetic MRIs by training them on real 3D T1-weighted MRI scans and conditioning the generative process on the clinical diagnosis as a context variable. To overcome limitations in training dataset size, compute time, and memory, we conducted experiments testing different model sizes, the effects of pretraining, training duration, and latent diffusion. We assessed the sampling quality of the disease-conditioned diffusion models using metrics that measure the realism and diversity of the generated synthetic MRIs. We also evaluated the models' ability to conditionally sample brain MRIs, using a 3D CNN-based disease classifier and comparing against real MRIs. In our experiments, the diffusion models generated synthetic data that helped to train an AD classifier (using only 500 real training scans), boosting its performance by over 3% when tested on real MRI scans. Further, we used implicit classifier-free guidance to alter the conditioning of an encoded individual scan to its counterfactual (representing a healthy subject of the same age and sex) while preserving subject-specific image details. From this counterfactual image, in which the same person appears healthy, we generated a personalized disease map to identify possible disease effects on the brain. Our approach efficiently generates realistic and diverse synthetic data, and may yield interpretable AI-based maps for neuroscience research and clinical diagnostic applications.
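As background for the methods summarized above, a minimal sketch of the two standard ingredients, the conditional denoising objective and classifier-free guidance, is given below in their textbook forms; the paper's exact variants (e.g., its implicit guidance scheme) may differ in detail:

\[
\mathcal{L}(\theta) = \mathbb{E}_{x_0,\, c,\, \epsilon \sim \mathcal{N}(0, I),\, t}\!\left[\, \lVert \epsilon - \epsilon_\theta(x_t, t, c) \rVert_2^2 \,\right],
\qquad
x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon,
\]
\[
\tilde{\epsilon}_\theta(x_t, t, c) = \epsilon_\theta(x_t, t, \varnothing) + w \left( \epsilon_\theta(x_t, t, c) - \epsilon_\theta(x_t, t, \varnothing) \right),
\]

where \(x_0\) is a real scan, \(c\) the conditioning context (here, the clinical diagnosis), \(\varnothing\) the null context, \(\bar{\alpha}_t\) the cumulative noise schedule, and \(w\) the guidance weight. Swapping \(c\) to the counterfactual diagnosis while resampling an encoded scan with the guided estimate \(\tilde{\epsilon}_\theta\) is one plausible way to realize the healthy counterfactuals described above; the personalized disease map could then be computed, for example, as a voxelwise comparison between the original scan and its counterfactual.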