This paper introduces a new method, based on coordinated group signal transformation (CGST), for compressing images in energy-starved systems such as satellites, unmanned aerial vehicles, and Internet of Things nodes. The transformation algorithm is a type of difference coding and may be classified as a non-transform-based image-compression method. CGST simplifies the difference-signal conversion scheme by using a single group codec for all signals and treats the color channels as correlated signals of a multi-channel communication system. The performance of CGST was evaluated using a dataset of 128 × 128 pixel images from satellite remote sensing systems. To adapt CGST to image compression, the algorithm was modified: the difference-signal calculation procedure was adjusted to prevent any "zeroing" of brightness, and the group codec was supplemented with a neural network to improve the quality of the restored images. The following types of neural networks were considered: fully connected, recurrent, convolutional, and convolutional in the Fourier space. Based on the simulation results, fully connected neural networks, with a response time of 13 ms, are recommended when the goal is to minimize processing delay. Conversely, when the priority is restoration quality and delays are not critical, convolutional neural networks in the Fourier space should be used; they provide an image compression ratio of 4.8 with better mean square error and Minkowski norm values than JPEG at the same compression ratio.
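
The sketch below is only an illustration of the general idea of difference coding across correlated color channels, not the authors' CGST algorithm or group codec; the channel layout and reference choice are assumptions made for clarity.

```python
# Illustrative sketch: inter-channel difference coding for an RGB image.
# This is NOT the CGST scheme from the paper; it only demonstrates the
# underlying principle of treating color channels as correlated signals
# and storing low-entropy channel differences.
import numpy as np

def encode_channel_differences(rgb: np.ndarray):
    """Split an H x W x 3 uint8 image into a reference channel and two
    difference signals (hypothetical layout chosen for illustration)."""
    r = rgb[..., 0].astype(np.int16)
    g = rgb[..., 1].astype(np.int16)
    b = rgb[..., 2].astype(np.int16)
    # Differences between correlated channels are typically small and
    # would be passed to an entropy coder in a real codec (not shown).
    return r, g - r, b - r

def decode_channel_differences(r, dg, db) -> np.ndarray:
    """Invert the difference coding and restore the uint8 image."""
    g = r + dg
    b = r + db
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)

# Round-trip check on a random 128 x 128 test image.
img = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
restored = decode_channel_differences(*encode_channel_differences(img))
assert np.array_equal(img, restored)
```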