Autofluorescence microscopy uses intrinsic sources of molecular contrast to provide cellular-level information without extrinsic labels. However, traditional cell segmentation tools are often optimized for high signal-to-noise ratio (SNR) images, such as those of fluorescently labeled cells, and unsurprisingly perform poorly on low-SNR autofluorescence images. Therefore, new cell segmentation tools are needed for autofluorescence microscopy. Cellpose is a deep learning network that generalizes across diverse cell microscopy images and automatically segments single cells to improve throughput and reduce inter-observer bias. This study aims to validate Cellpose for autofluorescence imaging, specifically multiphoton intensity images of NAD(P)H. Manually segmented nuclear masks of NAD(P)H images were used to train new Cellpose models. These models were applied to PANC-1 cells treated with metabolic inhibitors and to patient-derived cancer organoids (from 9 patients) treated with chemotherapies. These datasets include co-registered fluorescence lifetime imaging microscopy (FLIM) of NAD(P)H and FAD, so fluorescence decay parameters and the optical redox ratio (ORR) were compared between masks generated by the new Cellpose model and manual segmentation. The Dice score between repeated manually segmented masks was significantly lower than that between repeated Cellpose masks (p < 0.0001), indicating greater reproducibility of Cellpose masks. There was also a high correlation (R² > 0.9) between Cellpose and manually segmented masks for the ORR, mean NAD(P)H lifetime, and mean FAD lifetime across 2D and 3D cell culture treatment conditions. Masks generated by Cellpose and manual segmentation also maintained similar means, variances, and effect sizes between treatments for the ORR and FLIM parameters. Overall, Cellpose provides a fast, reliable, reproducible, and accurate method to segment single cells in autofluorescence microscopy images such that functional changes in cells are accurately captured in both 2D and 3D culture.
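
The core comparisons summarized above reduce to two simple computations: a Dice overlap between a Cellpose mask and a manual mask, and a per-cell optical redox ratio from co-registered NAD(P)H and FAD intensity images. The following is a minimal sketch, not the paper's exact pipeline: it assumes the `cellpose` Python package (2.x/3.x API), a hypothetical retrained model file `nadh_model`, illustrative file names, and ORR defined as NAD(P)H / (NAD(P)H + FAD), one common convention.

```python
import numpy as np
import tifffile
from cellpose import models


def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a > 0, b > 0
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())


# Segment an NAD(P)H intensity image with a retrained Cellpose model.
nadh = tifffile.imread("nadh_intensity.tif")  # hypothetical NAD(P)H channel
fad = tifffile.imread("fad_intensity.tif")    # co-registered FAD channel
model = models.CellposeModel(pretrained_model="nadh_model")  # hypothetical path
masks, flows, styles = model.eval(nadh, channels=[0, 0])     # grayscale input

# Compare against a manual mask, then compute a per-cell ORR
# (here ORR = NAD(P)H / (NAD(P)H + FAD); definitions vary by lab).
manual = tifffile.imread("manual_mask.tif")
print(f"Dice vs. manual segmentation: {dice(masks, manual):.3f}")
for label in np.unique(masks)[1:]:  # label 0 is background
    px = masks == label
    orr = nadh[px].sum() / (nadh[px].sum() + fad[px].sum())
    print(f"cell {label}: ORR = {orr:.3f}")
```

Because the masks are label images (one integer per cell), the same per-cell loop extends directly to mean NAD(P)H and FAD lifetimes by substituting FLIM parameter maps for the intensity images.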