Deploying deep convolutional neural networks on low-compute devices is an increasingly important area of research. Wearable systems, drones, and robots use semantic information to manipulate, control, and estimate variables for efficient navigation or to acquire contextual information. However, real-time semantic segmentation is challenging on low-compute devices. We propose a compact convolutional neural network for real-time applications on low-compute devices. Our decoder uses pixel shuffling to achieve efficient inference. We compared our CNN with state-of-the-art models ranked in the Cityscapes real-time semantic segmentation category. We propose a modified NetScore that includes frames per second (FPS) as an additional metric, alongside the traditional metrics of mIoU, GFLOPs, and the number of parameters, to evaluate mobile computing performance. Our CNN achieved 65.7 FPS and 76.7% mIoU without ImageNet pre-training, while requiring 25 GFLOPs and 4.55M parameters, yielding a modified NetScore of 127.53 compared to 119.89 for DDRNet23-slim and 115.39 for RegSeg. In addition to the Cityscapes results, our CNN's performance on the CamVid test set (83.3% mIoU and 354 FPS with TensorRT) surpassed the published mIoU values for RegSeg and other CNNs, setting a new state of the art. To demonstrate compatibility with low-compute devices, we evaluated our CNN on two mobile computing platforms and showed real-time performance: 57 FPS on a Jetson NX 8 GB with TensorRT and 12.65 FPS on a Jetson Xavier AGX without TensorRT. Our CNN can operate with high accuracy on low-compute devices to support systems that benefit from semantic information.

INDEX TERMS Semantic segmentation, hardware-constrained, low compute devices, real-time performance, deep convolutional neural network, Jetson Xavier AGX, Jetson NX.
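To make the modified NetScore concrete, the sketch below shows one plausible way such a metric could be computed: a NetScore-style log-ratio of accuracy against model cost, extended with an FPS term. The exponents and the exact functional form here are illustrative assumptions, not the paper's definition, so the returned value is not expected to reproduce the reported 127.53.

```python
import math

def modified_netscore(miou, fps, gflops, params_m,
                      alpha=2.0, beta=0.5, gamma=0.5, delta=0.5):
    """NetScore-style efficiency metric extended with an FPS term.

    miou:     accuracy in percent (e.g. 76.7)
    fps:      inference speed in frames per second
    gflops:   compute cost in GFLOPs
    params_m: parameter count in millions

    NOTE: the exponents (alpha..delta) and the placement of the FPS
    term are assumptions for illustration; the paper defines its own
    modified NetScore.
    """
    return 20.0 * math.log10(
        (miou ** alpha * fps ** delta)
        / (params_m ** beta * gflops ** gamma)
    )

# Plugging in the abstract's reported numbers (illustrative only):
score = modified_netscore(miou=76.7, fps=65.7, gflops=25.0, params_m=4.55)
```

Under this form, a model scores higher when it is more accurate or faster, and lower when it needs more parameters or compute, which matches the intent of ranking real-time models rather than accuracy alone.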