Convolutional neural networks (CNNs) have achieved strong results in hyperspectral image (HSI) classification in recent years and are widely applied in agricultural remote sensing, geological exploration, environmental monitoring, and marine remote sensing. Unfortunately, the complexity of the network structures used for HSI classification severely hampers the efficient transfer of HSI features: existing methods carry a large amount of redundancy in their weight parameters during training, either demanding huge computational resources or using storage space inefficiently, and many of these wasteful parameters contribute little to conveying the rich spectral and spatial information in HSIs. We therefore introduce LCTCS, a low-memory, low-parameter network approach that aims to achieve competitive classification performance while using computational resources more efficiently. Unlike the conventional 2D and 3D convolutions used in earlier work, we adopt simple and efficient 3D grouped convolution as the vehicle for conveying the semantic features of HSIs. More specifically, because grouped 3D convolution captures the properties of hyperspectral data well in both the spectral and spatial domains, we design a novel two-channel sparse network for HSI classification. We compared LCTCS with eight widely used network methods on four publicly available hyperspectral datasets. A series of experiments shows that the proposed architecture requires $$65.89\%$$ less storage space than the DBDA method, consumes $$67.36\%$$ fewer computational resources than the SSRN method on the IP dataset, and accomplishes highly accurate classification with only $$1.99\%$$ of the parameters of the DBMA method.
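As a rough illustration of why grouped 3D convolution saves parameters, the following minimal PyTorch sketch compares a standard 3D convolution with a grouped one on an HSI-style patch. This is not the LCTCS architecture itself; the channel counts, group number, and patch size are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical layer sizes for illustration only.
in_ch, out_ch, k, groups = 32, 32, 3, 8

standard = nn.Conv3d(in_ch, out_ch, kernel_size=k, padding=1)
grouped = nn.Conv3d(in_ch, out_ch, kernel_size=k, padding=1, groups=groups)

def n_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

# A fake HSI patch: (batch, feature channels, spectral bands, height, width).
x = torch.randn(1, in_ch, 20, 9, 9)
assert standard(x).shape == grouped(x).shape  # same output shape

print(f"standard 3D conv params: {n_params(standard)}")  # 32*32*27 + 32 = 27,680
print(f"grouped  3D conv params: {n_params(grouped)}")   # 32*(32/8)*27 + 32 = 3,488
```

With `groups=8`, each output channel only convolves over `in_ch / groups` input channels, so the weight tensor shrinks by roughly the group factor while the input and output shapes are unchanged.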