Biological systems often operate within a narrow temperature range, which requires highly accurate, spatially resolved temperature measurements, often near ±0.1 K. However, many temperature sensors cannot meet both the accuracy and spatial-distribution requirements, often because their accuracy is limited by data-fitting and temperature-reconstruction models. Machine learning algorithms have the potential to meet this need, but their use in generating spatial distributions of temperature is severely lacking in the literature. This work presents the first instance of using neural networks to process fluorescent images to map the spatial distribution of temperature. Three standard network architectures were investigated: one used in non-spatially resolved fluorescent thermometry (a fully connected feed-forward network) and two used in image or pixel identification (a U-net and a convolutional neural network, CNN). Simulated fluorescent images were generated from experimental data for known temperature distributions, to which Gaussian white noise with a standard deviation of ±0.1 K was added. The poor results from these standard networks motivated the creation of what is termed a moving CNN (MCNN), whose input matrix elements represent the neighboring pixels, which achieved an RMSE of ±0.23 K. Finally, the performance of this MCNN is investigated when trained and applied to three distinct temperature distributions characteristic of microfluidic devices, where the fluorescent image is simulated at either three or five different wavelengths. The results demonstrate that training with a minimum of 10^3.5 data points per temperature and the broadest range of temperatures yields temperature predictions nearest to the true temperatures of the images, with a minimum RMSE of ±0.15 K. Compared with traditional curve-fitting techniques, this work demonstrates that convolutional neural networks achieve greater accuracy when spatially mapping temperature from fluorescent images.
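
As a rough illustration of the moving-CNN idea described above, the sketch below regresses the temperature of each pixel from the N x N neighborhood of its multi-wavelength fluorescence intensities. It is a minimal sketch only: PyTorch is an assumed framework, and the layer sizes, the 5 x 5 patch, and the MovingCNN and predict_temperature_map names are hypothetical choices, not taken from the paper.

    import torch
    import torch.nn as nn

    class MovingCNN(nn.Module):
        """Hypothetical sketch of a 'moving' CNN: a small convolutional
        regressor mapping an N x N neighborhood of multi-wavelength
        fluorescence intensities to the central pixel's temperature."""
        def __init__(self, n_wavelengths: int = 3, patch: int = 5):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(n_wavelengths, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Flatten(),
                nn.Linear(32 * patch * patch, 64),
                nn.ReLU(),
                nn.Linear(64, 1),  # predicted temperature (K) of the central pixel
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)

    def predict_temperature_map(model: nn.Module, image: torch.Tensor,
                                patch: int = 5) -> torch.Tensor:
        """Slide the patch over every pixel of a (C, H, W) fluorescent image
        and assemble an (H, W) temperature map; unfold() extracts all
        neighborhoods at once to keep the sliding-window idea explicit."""
        c, h, w = image.shape
        pad = patch // 2
        padded = nn.functional.pad(image.unsqueeze(0),
                                   (pad, pad, pad, pad), mode="reflect")
        patches = padded.unfold(2, patch, 1).unfold(3, patch, 1)  # (1, C, H, W, p, p)
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(-1, c, patch, patch)
        with torch.no_grad():
            temps = model(patches).reshape(h, w)
        return temps

    if __name__ == "__main__":
        model = MovingCNN(n_wavelengths=3, patch=5)
        # Synthetic stand-in for a three-wavelength fluorescent image; the
        # noise here is purely illustrative, not the paper's ±0.1 K noise model.
        image = torch.ones(3, 32, 32) + 0.1 * torch.randn(3, 32, 32)
        t_map = predict_temperature_map(model, image, patch=5)
        print(t_map.shape)  # torch.Size([32, 32])

In practice the per-pixel patches would be evaluated in batches and the network trained on the simulated images before use; the sketch only makes concrete how a neighborhood-matrix input can yield one temperature per pixel.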